Nov 24 08:47:34 localhost kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 24 08:47:34 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 24 08:47:34 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 08:47:34 localhost kernel: BIOS-provided physical RAM map:
Nov 24 08:47:34 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 24 08:47:34 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 24 08:47:34 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 08:47:34 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 24 08:47:34 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 24 08:47:34 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 08:47:34 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 08:47:34 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
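
The three "usable" e820 ranges above account for the guest's RAM. As a quick check, a minimal Python sketch (ranges hard-coded from this log; e820 end addresses are inclusive, so each range spans end - start + 1 bytes):

    # Usable e820 ranges copied from the BIOS-e820 lines above (start, end inclusive).
    usable = [
        (0x0000000000000000, 0x000000000009fbff),
        (0x0000000000100000, 0x00000000bffdafff),
        (0x0000000100000000, 0x000000023fffffff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(f"{total} bytes = {total / 2**30:.2f} GiB")  # ~8.00 GiB, minus the legacy holes
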
Nov 24 08:47:34 localhost kernel: NX (Execute Disable) protection: active
Nov 24 08:47:34 localhost kernel: APIC: Static calls initialized
Nov 24 08:47:34 localhost kernel: SMBIOS 2.8 present.
Nov 24 08:47:34 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 24 08:47:34 localhost kernel: Hypervisor detected: KVM
Nov 24 08:47:34 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 08:47:34 localhost kernel: kvm-clock: using sched offset of 5096163546 cycles
Nov 24 08:47:34 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 08:47:34 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 24 08:47:34 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 08:47:34 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 08:47:34 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 24 08:47:34 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 08:47:34 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 24 08:47:34 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 24 08:47:34 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 24 08:47:34 localhost kernel: Using GB pages for direct mapping
Nov 24 08:47:34 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 24 08:47:34 localhost kernel: ACPI: Early table checksum verification disabled
Nov 24 08:47:34 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 24 08:47:34 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 08:47:34 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 08:47:34 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 08:47:34 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 24 08:47:34 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 08:47:34 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 08:47:34 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 24 08:47:34 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 24 08:47:34 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 24 08:47:34 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 24 08:47:34 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 24 08:47:34 localhost kernel: No NUMA configuration found
Nov 24 08:47:34 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 24 08:47:34 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 24 08:47:34 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
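
The crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M parameter maps total-RAM ranges to reservation sizes; with roughly 8 GiB of RAM this guest lands in the 2G-64G bucket, matching the 256 MB reservation logged above. A simplified sketch of that lookup (the real kernel syntax also allows @offset and ,high/,low suffixes, ignored here):

    def crashkernel_size(param: str, ram_bytes: int) -> int:
        """Simplified sketch of resolving crashkernel=range:size[,...]."""
        units = {"G": 2**30, "M": 2**20}
        to_bytes = lambda s: int(s[:-1]) * units[s[-1]]
        for entry in param.split(","):
            rng, size = entry.split(":")
            start, _, end = rng.partition("-")
            lo = to_bytes(start)
            hi = to_bytes(end) if end else float("inf")
            if lo <= ram_bytes < hi:        # first matching range wins
                return to_bytes(size)
        return 0

    print(crashkernel_size("1G-2G:192M,2G-64G:256M,64G-:512M", 8 * 2**30) // 2**20)  # 256
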
Nov 24 08:47:34 localhost kernel: Zone ranges:
Nov 24 08:47:34 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 08:47:34 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 24 08:47:34 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 08:47:34 localhost kernel:   Device   empty
Nov 24 08:47:34 localhost kernel: Movable zone start for each node
Nov 24 08:47:34 localhost kernel: Early memory node ranges
Nov 24 08:47:34 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 08:47:34 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 24 08:47:34 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 08:47:34 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 24 08:47:34 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 08:47:34 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 08:47:34 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
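
Summing the three early node ranges gives the node's span of 2,097,017 pages. The "Total pages: 2064091" reported further down is smaller; assuming the gap is mostly the struct page array (the memmap, about 64 bytes per 4 KiB page on x86_64) plus other early reservations, the numbers line up roughly:

    PAGE = 4096
    ranges = [  # early memory node ranges from the log above (end inclusive)
        (0x0000000000001000, 0x000000000009efff),
        (0x0000000000100000, 0x00000000bffdafff),
        (0x0000000100000000, 0x000000023fffffff),
    ]
    spanned = sum((end + 1 - start) // PAGE for start, end in ranges)
    print(spanned)                 # 2097017 pages spanned
    print(spanned * 64 // PAGE)    # ~32766 pages for a 64-byte-per-page memmap (assumption)
    print(spanned - 2064091)       # 32926 pages actually missing from "Total pages"
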
Nov 24 08:47:34 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 08:47:34 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 08:47:34 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 08:47:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 08:47:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 08:47:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 08:47:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 08:47:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 08:47:34 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 08:47:34 localhost kernel: TSC deadline timer available
Nov 24 08:47:34 localhost kernel: CPU topo: Max. logical packages:   8
Nov 24 08:47:34 localhost kernel: CPU topo: Max. logical dies:       8
Nov 24 08:47:34 localhost kernel: CPU topo: Max. dies per package:   1
Nov 24 08:47:34 localhost kernel: CPU topo: Max. threads per core:   1
Nov 24 08:47:34 localhost kernel: CPU topo: Num. cores per package:     1
Nov 24 08:47:34 localhost kernel: CPU topo: Num. threads per package:   1
Nov 24 08:47:34 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 24 08:47:34 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 08:47:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 24 08:47:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 24 08:47:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 24 08:47:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 24 08:47:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 24 08:47:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 24 08:47:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 24 08:47:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 24 08:47:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 24 08:47:34 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 24 08:47:34 localhost kernel: Booting paravirtualized kernel on KVM
Nov 24 08:47:34 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 08:47:34 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 24 08:47:34 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 24 08:47:34 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 24 08:47:34 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
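
The pcpu-alloc figures are self-consistent: the per-CPU unit size u is the static (s), reserved (r), and dynamic (d) areas combined, and the single 2 MiB allocation covers all eight CPUs:

    s, r, d, u = 225280, 8192, 28672, 262144   # from the pcpu-alloc line above
    assert s + r + d == u                      # 262144 bytes per CPU
    assert 8 * u == 1 * 2097152                # alloc=1*2097152: one 2 MiB chunk, 8 CPUs
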
Nov 24 08:47:34 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 24 08:47:34 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 08:47:34 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
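
BOOT_IMAGE=... is written by GRUB rather than being a kernel parameter, so the kernel passes it through to user space unmodified; it shows up again in init's environment at the end of this boot. A hypothetical parser (not kernel code) splitting such a command line into key=value parameters and bare flags:

    # Hypothetical illustration only; the command line is copied from the log.
    cmdline = ("BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 "
               "root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro "
               "console=ttyS0,115200n8 no_timer_check net.ifnames=0 "
               "crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M")
    params, flags = {}, []
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        if sep:
            params[key] = value
        else:
            flags.append(key)
    print(flags)              # ['ro', 'no_timer_check']
    print(params["console"])  # ttyS0,115200n8
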
Nov 24 08:47:34 localhost kernel: random: crng init done
Nov 24 08:47:34 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 24 08:47:34 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 24 08:47:34 localhost kernel: Fallback order for Node 0: 0 
Nov 24 08:47:34 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 24 08:47:34 localhost kernel: Policy zone: Normal
Nov 24 08:47:34 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 08:47:34 localhost kernel: software IO TLB: area num 8.
Nov 24 08:47:34 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 24 08:47:34 localhost kernel: ftrace: allocating 49298 entries in 193 pages
Nov 24 08:47:34 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 24 08:47:34 localhost kernel: Dynamic Preempt: voluntary
Nov 24 08:47:34 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 08:47:34 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 24 08:47:34 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 24 08:47:34 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 24 08:47:34 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 24 08:47:34 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 24 08:47:34 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 08:47:34 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 24 08:47:34 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 08:47:34 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 08:47:34 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 08:47:34 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 24 08:47:34 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 08:47:34 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 24 08:47:34 localhost kernel: Console: colour VGA+ 80x25
Nov 24 08:47:34 localhost kernel: printk: console [ttyS0] enabled
Nov 24 08:47:34 localhost kernel: ACPI: Core revision 20230331
Nov 24 08:47:34 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 08:47:34 localhost kernel: x2apic enabled
Nov 24 08:47:34 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 08:47:34 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 24 08:47:34 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 24 08:47:34 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 24 08:47:34 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 24 08:47:34 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 24 08:47:34 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 08:47:34 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 08:47:34 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 08:47:34 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 24 08:47:34 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 24 08:47:34 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 08:47:34 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 08:47:34 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 24 08:47:34 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 24 08:47:34 localhost kernel: x86/bugs: return thunk changed
Nov 24 08:47:34 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 24 08:47:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 08:47:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 08:47:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 08:47:34 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 24 08:47:34 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
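
In the compacted XSAVE format the context size is the last enabled feature's offset plus its size: the AVX state sits at offset 576 (the 512-byte legacy x87/SSE area plus the 64-byte XSAVE header), and 576 + 256 gives the 832 bytes logged:

    legacy_area, xsave_header = 512, 64
    avx_offset, avx_size = 576, 256        # xstate_offset[2] / xstate_sizes[2] above
    assert legacy_area + xsave_header == avx_offset
    assert avx_offset + avx_size == 832    # "context size is 832 bytes"
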
Nov 24 08:47:34 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 24 08:47:34 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 24 08:47:34 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 24 08:47:34 localhost kernel: landlock: Up and running.
Nov 24 08:47:34 localhost kernel: Yama: becoming mindful.
Nov 24 08:47:34 localhost kernel: SELinux:  Initializing.
Nov 24 08:47:34 localhost kernel: LSM support for eBPF active
Nov 24 08:47:34 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 08:47:34 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 08:47:34 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 24 08:47:34 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 24 08:47:34 localhost kernel: ... version:                0
Nov 24 08:47:34 localhost kernel: ... bit width:              48
Nov 24 08:47:34 localhost kernel: ... generic registers:      6
Nov 24 08:47:34 localhost kernel: ... value mask:             0000ffffffffffff
Nov 24 08:47:34 localhost kernel: ... max period:             00007fffffffffff
Nov 24 08:47:34 localhost kernel: ... fixed-purpose events:   0
Nov 24 08:47:34 localhost kernel: ... event mask:             000000000000003f
Nov 24 08:47:34 localhost kernel: signal: max sigframe size: 1776
Nov 24 08:47:34 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 24 08:47:34 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 24 08:47:34 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 24 08:47:34 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 24 08:47:34 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 24 08:47:34 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 24 08:47:34 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
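
The BogoMIPS totals are consistent with the logged lpj: with calibration skipped, BogoMIPS is loops_per_jiffy * HZ / 500000 (assuming HZ=1000, the RHEL default), and the kernel truncates rather than rounds when printing:

    HZ, lpj, cpus = 1000, 2799998, 8        # HZ=1000 assumed; lpj and CPU count from the log
    per_cpu = lpj * HZ / 500000             # BogoMIPS = loops_per_jiffy * HZ / 500000
    trunc = lambda x: int(x * 100) / 100    # kernel printk truncates to two decimals
    print(trunc(per_cpu))                   # 5599.99, as on the calibration line
    print(trunc(cpus * per_cpu))            # 44799.96, as on the line above
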
Nov 24 08:47:34 localhost kernel: node 0 deferred pages initialised in 11ms
Nov 24 08:47:34 localhost kernel: Memory: 7766108K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616268K reserved, 0K cma-reserved)
Nov 24 08:47:34 localhost kernel: devtmpfs: initialized
Nov 24 08:47:34 localhost kernel: x86/mm: Memory block size: 128MB
Nov 24 08:47:34 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 08:47:34 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 24 08:47:34 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 08:47:34 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 08:47:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 24 08:47:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 24 08:47:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 24 08:47:34 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 24 08:47:34 localhost kernel: audit: type=2000 audit(1763974053.123:1): state=initialized audit_enabled=0 res=1
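
Audit timestamps are seconds.milliseconds since the Unix epoch followed by a serial number; 1763974053 decodes to the same wall-clock instant the RTC reports later in this log (2025-11-24T08:47:33 UTC):

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1763974053, tz=timezone.utc).isoformat())
    # 2025-11-24T08:47:33+00:00
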
Nov 24 08:47:34 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 24 08:47:34 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 08:47:34 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 08:47:34 localhost kernel: cpuidle: using governor menu
Nov 24 08:47:34 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 08:47:34 localhost kernel: PCI: Using configuration type 1 for base access
Nov 24 08:47:34 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 24 08:47:34 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 08:47:34 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 08:47:34 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 08:47:34 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 08:47:34 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 08:47:34 localhost kernel: Demotion targets for Node 0: null
Nov 24 08:47:34 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 24 08:47:34 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 24 08:47:34 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 24 08:47:34 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 24 08:47:34 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 08:47:34 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 08:47:34 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 24 08:47:34 localhost kernel: ACPI: Interpreter enabled
Nov 24 08:47:34 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 24 08:47:34 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 08:47:34 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 08:47:34 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 08:47:34 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 24 08:47:34 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 08:47:34 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [3] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [4] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [5] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [6] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [7] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [8] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [9] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [10] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [11] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [12] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [13] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [14] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [15] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [16] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [17] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [18] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [19] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [20] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [21] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [22] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [23] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [24] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [25] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [26] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [27] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [28] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [29] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [30] registered
Nov 24 08:47:34 localhost kernel: acpiphp: Slot [31] registered
Nov 24 08:47:34 localhost kernel: PCI host bridge to bus 0000:00
Nov 24 08:47:34 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 24 08:47:34 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 24 08:47:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 08:47:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 24 08:47:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 24 08:47:34 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 08:47:34 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 24 08:47:34 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 24 08:47:34 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 24 08:47:34 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 24 08:47:34 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 24 08:47:34 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 24 08:47:34 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 08:47:34 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 08:47:34 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 24 08:47:34 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 24 08:47:34 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 24 08:47:34 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 24 08:47:34 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 08:47:34 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 24 08:47:34 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 24 08:47:34 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 24 08:47:34 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 08:47:34 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 24 08:47:34 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 24 08:47:34 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 08:47:34 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 24 08:47:34 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 24 08:47:34 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 08:47:34 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 08:47:34 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 08:47:34 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 08:47:34 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 24 08:47:34 localhost kernel: iommu: Default domain type: Translated
Nov 24 08:47:34 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 08:47:34 localhost kernel: SCSI subsystem initialized
Nov 24 08:47:34 localhost kernel: ACPI: bus type USB registered
Nov 24 08:47:34 localhost kernel: usbcore: registered new interface driver usbfs
Nov 24 08:47:34 localhost kernel: usbcore: registered new interface driver hub
Nov 24 08:47:34 localhost kernel: usbcore: registered new device driver usb
Nov 24 08:47:34 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 24 08:47:34 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 24 08:47:34 localhost kernel: PTP clock support registered
Nov 24 08:47:34 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 24 08:47:34 localhost kernel: NetLabel: Initializing
Nov 24 08:47:34 localhost kernel: NetLabel:  domain hash size = 128
Nov 24 08:47:34 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 24 08:47:34 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 24 08:47:34 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 24 08:47:34 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 24 08:47:34 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 24 08:47:34 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 24 08:47:34 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 24 08:47:34 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 24 08:47:34 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 08:47:34 localhost kernel: vgaarb: loaded
Nov 24 08:47:34 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 08:47:34 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 08:47:34 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 08:47:34 localhost kernel: pnp: PnP ACPI init
Nov 24 08:47:34 localhost kernel: pnp 00:03: [dma 2]
Nov 24 08:47:34 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 24 08:47:34 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 08:47:34 localhost kernel: NET: Registered PF_INET protocol family
Nov 24 08:47:34 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 24 08:47:34 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 24 08:47:34 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 08:47:34 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 24 08:47:34 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 24 08:47:34 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 24 08:47:34 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 24 08:47:34 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 08:47:34 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
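
For these hash tables, "order: N" means 2^N pages of 4 KiB, so the byte counts follow from the orders, and bytes divided by entries gives the bucket size, which varies by table (the MPTCP token line deviates because its per-bucket size is not a power of two). A consistency check against three of the lines above:

    PAGE = 4096
    tables = {  # name: (entries, order, bytes), copied from the log
        "TCP established": (65536, 7, 524288),
        "TCP bind":        (65536, 8, 1048576),
        "UDP":             (4096,  5, 131072),
    }
    for name, (entries, order, nbytes) in tables.items():
        assert (1 << order) * PAGE == nbytes
        print(name, nbytes // entries, "bytes per bucket")   # 8, 16, 32
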
Nov 24 08:47:34 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 08:47:34 localhost kernel: NET: Registered PF_XDP protocol family
Nov 24 08:47:34 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 24 08:47:34 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 24 08:47:34 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 08:47:34 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 24 08:47:34 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 24 08:47:34 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 24 08:47:34 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 24 08:47:34 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73618 usecs
Nov 24 08:47:34 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 24 08:47:34 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 24 08:47:34 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 24 08:47:34 localhost kernel: ACPI: bus type thunderbolt registered
Nov 24 08:47:34 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 24 08:47:34 localhost kernel: Initialise system trusted keyrings
Nov 24 08:47:34 localhost kernel: Key type blacklist registered
Nov 24 08:47:34 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 24 08:47:34 localhost kernel: zbud: loaded
Nov 24 08:47:34 localhost kernel: integrity: Platform Keyring initialized
Nov 24 08:47:34 localhost kernel: integrity: Machine keyring initialized
Nov 24 08:47:34 localhost kernel: Freeing initrd memory: 85868K
Nov 24 08:47:34 localhost kernel: NET: Registered PF_ALG protocol family
Nov 24 08:47:34 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 24 08:47:34 localhost kernel: Key type asymmetric registered
Nov 24 08:47:34 localhost kernel: Asymmetric key parser 'x509' registered
Nov 24 08:47:34 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 24 08:47:34 localhost kernel: io scheduler mq-deadline registered
Nov 24 08:47:34 localhost kernel: io scheduler kyber registered
Nov 24 08:47:34 localhost kernel: io scheduler bfq registered
Nov 24 08:47:34 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 24 08:47:34 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 24 08:47:34 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 24 08:47:34 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 24 08:47:34 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 24 08:47:34 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 24 08:47:34 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 24 08:47:34 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 08:47:34 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 08:47:34 localhost kernel: Non-volatile memory driver v1.3
Nov 24 08:47:34 localhost kernel: rdac: device handler registered
Nov 24 08:47:34 localhost kernel: hp_sw: device handler registered
Nov 24 08:47:34 localhost kernel: emc: device handler registered
Nov 24 08:47:34 localhost kernel: alua: device handler registered
Nov 24 08:47:34 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 24 08:47:34 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 24 08:47:34 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 24 08:47:34 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 24 08:47:34 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 24 08:47:34 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 24 08:47:34 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 24 08:47:34 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 24 08:47:34 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 24 08:47:34 localhost kernel: hub 1-0:1.0: USB hub found
Nov 24 08:47:34 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 24 08:47:34 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 24 08:47:34 localhost kernel: usbserial: USB Serial support registered for generic
Nov 24 08:47:34 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 08:47:34 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 08:47:34 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 08:47:34 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 24 08:47:34 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 24 08:47:34 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 24 08:47:34 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 24 08:47:34 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 24 08:47:34 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-24T08:47:33 UTC (1763974053)
Nov 24 08:47:34 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 24 08:47:34 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 24 08:47:34 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 24 08:47:34 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 24 08:47:34 localhost kernel: usbcore: registered new interface driver usbhid
Nov 24 08:47:34 localhost kernel: usbhid: USB HID core driver
Nov 24 08:47:34 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 24 08:47:34 localhost kernel: Initializing XFRM netlink socket
Nov 24 08:47:34 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 24 08:47:34 localhost kernel: Segment Routing with IPv6
Nov 24 08:47:34 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 24 08:47:34 localhost kernel: mpls_gso: MPLS GSO support
Nov 24 08:47:34 localhost kernel: IPI shorthand broadcast: enabled
Nov 24 08:47:34 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 24 08:47:34 localhost kernel: AES CTR mode by8 optimization enabled
Nov 24 08:47:34 localhost kernel: sched_clock: Marking stable (1168004268, 151616558)->(1441733510, -122112684)
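
The sched_clock transition prints (raw, offset) nanosecond pairs before and after marking the clock stable; the point of the new offset is that raw + offset, the effective clock value, is unchanged across the switch:

    before = (1168004268, 151616558)        # (raw, offset) pairs from the log line above
    after  = (1441733510, -122112684)
    assert sum(before) == sum(after) == 1319620826   # ~1.32 s since boot either way
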
Nov 24 08:47:34 localhost kernel: registered taskstats version 1
Nov 24 08:47:34 localhost kernel: Loading compiled-in X.509 certificates
Nov 24 08:47:34 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 24 08:47:34 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 24 08:47:34 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 24 08:47:34 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 24 08:47:34 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 24 08:47:34 localhost kernel: Demotion targets for Node 0: null
Nov 24 08:47:34 localhost kernel: page_owner is disabled
Nov 24 08:47:34 localhost kernel: Key type .fscrypt registered
Nov 24 08:47:34 localhost kernel: Key type fscrypt-provisioning registered
Nov 24 08:47:34 localhost kernel: Key type big_key registered
Nov 24 08:47:34 localhost kernel: Key type encrypted registered
Nov 24 08:47:34 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 08:47:34 localhost kernel: Loading compiled-in module X.509 certificates
Nov 24 08:47:34 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 24 08:47:34 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 24 08:47:34 localhost kernel: ima: No architecture policies found
Nov 24 08:47:34 localhost kernel: evm: Initialising EVM extended attributes:
Nov 24 08:47:34 localhost kernel: evm: security.selinux
Nov 24 08:47:34 localhost kernel: evm: security.SMACK64 (disabled)
Nov 24 08:47:34 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 24 08:47:34 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 24 08:47:34 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 24 08:47:34 localhost kernel: evm: security.apparmor (disabled)
Nov 24 08:47:34 localhost kernel: evm: security.ima
Nov 24 08:47:34 localhost kernel: evm: security.capability
Nov 24 08:47:34 localhost kernel: evm: HMAC attrs: 0x1
Nov 24 08:47:34 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 24 08:47:34 localhost kernel: Running certificate verification RSA selftest
Nov 24 08:47:34 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 24 08:47:34 localhost kernel: Running certificate verification ECDSA selftest
Nov 24 08:47:34 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 24 08:47:34 localhost kernel: clk: Disabling unused clocks
Nov 24 08:47:34 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 24 08:47:34 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 24 08:47:34 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 24 08:47:34 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 24 08:47:34 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 24 08:47:34 localhost kernel: Run /init as init process
Nov 24 08:47:34 localhost kernel:   with arguments:
Nov 24 08:47:34 localhost kernel:     /init
Nov 24 08:47:34 localhost kernel:   with environment:
Nov 24 08:47:34 localhost kernel:     HOME=/
Nov 24 08:47:34 localhost kernel:     TERM=linux
Nov 24 08:47:34 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64
Nov 24 08:47:34 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 08:47:34 localhost systemd[1]: Detected virtualization kvm.
Nov 24 08:47:34 localhost systemd[1]: Detected architecture x86-64.
Nov 24 08:47:34 localhost systemd[1]: Running in initrd.
Nov 24 08:47:34 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 24 08:47:34 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 24 08:47:34 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 24 08:47:34 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 24 08:47:34 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 24 08:47:34 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 24 08:47:34 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 24 08:47:34 localhost systemd[1]: No hostname configured, using default hostname.
Nov 24 08:47:34 localhost systemd[1]: Hostname set to <localhost>.
Nov 24 08:47:34 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 24 08:47:34 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 24 08:47:34 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 08:47:34 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 24 08:47:34 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 24 08:47:34 localhost systemd[1]: Reached target Local File Systems.
Nov 24 08:47:34 localhost systemd[1]: Reached target Path Units.
Nov 24 08:47:34 localhost systemd[1]: Reached target Slice Units.
Nov 24 08:47:34 localhost systemd[1]: Reached target Swaps.
Nov 24 08:47:34 localhost systemd[1]: Reached target Timer Units.
Nov 24 08:47:34 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 24 08:47:34 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 24 08:47:34 localhost systemd[1]: Listening on Journal Socket.
Nov 24 08:47:34 localhost systemd[1]: Listening on udev Control Socket.
Nov 24 08:47:34 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 24 08:47:34 localhost systemd[1]: Reached target Socket Units.
Nov 24 08:47:34 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 24 08:47:34 localhost systemd[1]: Starting Journal Service...
Nov 24 08:47:34 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 08:47:34 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 24 08:47:34 localhost systemd[1]: Starting Create System Users...
Nov 24 08:47:34 localhost systemd[1]: Starting Setup Virtual Console...
Nov 24 08:47:34 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 24 08:47:34 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 24 08:47:34 localhost systemd[1]: Finished Create System Users.
Nov 24 08:47:34 localhost systemd-journald[304]: Journal started
Nov 24 08:47:34 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/4c455ecc8696436bb07b3b4a91ae800f) is 8.0M, max 153.6M, 145.6M free.
Nov 24 08:47:34 localhost systemd-sysusers[308]: Creating group 'users' with GID 100.
Nov 24 08:47:34 localhost systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Nov 24 08:47:34 localhost systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 24 08:47:34 localhost systemd[1]: Started Journal Service.
Nov 24 08:47:34 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 08:47:34 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 08:47:34 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 08:47:34 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 08:47:34 localhost systemd[1]: Finished Setup Virtual Console.
Nov 24 08:47:34 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 24 08:47:34 localhost systemd[1]: Starting dracut cmdline hook...
Nov 24 08:47:34 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Nov 24 08:47:34 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 08:47:35 localhost systemd[1]: Finished dracut cmdline hook.
Nov 24 08:47:35 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 24 08:47:35 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 24 08:47:35 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 24 08:47:35 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 24 08:47:35 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 24 08:47:35 localhost kernel: RPC: Registered udp transport module.
Nov 24 08:47:35 localhost kernel: RPC: Registered tcp transport module.
Nov 24 08:47:35 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 24 08:47:35 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 24 08:47:35 localhost rpc.statd[442]: Version 2.5.4 starting
Nov 24 08:47:35 localhost rpc.statd[442]: Initializing NSM state
Nov 24 08:47:35 localhost rpc.idmapd[447]: Setting log level to 0
Nov 24 08:47:35 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 24 08:47:35 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 08:47:35 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 08:47:35 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 08:47:35 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 24 08:47:35 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 24 08:47:35 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 24 08:47:35 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 24 08:47:35 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 08:47:35 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 24 08:47:35 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 08:47:35 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 08:47:35 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 08:47:35 localhost systemd[1]: Reached target Network.
Nov 24 08:47:35 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 08:47:35 localhost systemd[1]: Starting dracut initqueue hook...
Nov 24 08:47:35 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 24 08:47:35 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 24 08:47:35 localhost kernel:  vda: vda1
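
The virtio disk geometry checks out: 167,772,160 sectors of 512 bytes is exactly 80 GiB, which in decimal units is the 85.9 GB the kernel also prints:

    sectors, sector_size = 167772160, 512
    size = sectors * sector_size
    print(round(size / 10**9, 1), "GB")   # 85.9 (decimal gigabytes)
    print(size / 2**30, "GiB")            # 80.0 (binary gibibytes)
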
Nov 24 08:47:35 localhost kernel: libata version 3.00 loaded.
Nov 24 08:47:35 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 24 08:47:35 localhost systemd-udevd[474]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 08:47:35 localhost kernel: scsi host0: ata_piix
Nov 24 08:47:35 localhost kernel: scsi host1: ata_piix
Nov 24 08:47:35 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 24 08:47:35 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 24 08:47:35 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 08:47:35 localhost systemd[1]: Reached target Initrd Root Device.
Nov 24 08:47:35 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 24 08:47:35 localhost kernel: ata1: found unknown device (class 0)
Nov 24 08:47:35 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 24 08:47:35 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 24 08:47:35 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 24 08:47:35 localhost systemd[1]: Reached target System Initialization.
Nov 24 08:47:35 localhost systemd[1]: Reached target Basic System.
Nov 24 08:47:35 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 24 08:47:35 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 24 08:47:35 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 24 08:47:35 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 24 08:47:35 localhost systemd[1]: Finished dracut initqueue hook.
Nov 24 08:47:35 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 08:47:35 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 24 08:47:35 localhost systemd[1]: Reached target Remote File Systems.
Nov 24 08:47:35 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 24 08:47:35 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 24 08:47:35 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 24 08:47:35 localhost systemd-fsck[552]: /usr/sbin/fsck.xfs: XFS file system.
Nov 24 08:47:35 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 08:47:35 localhost systemd[1]: Mounting /sysroot...
Nov 24 08:47:36 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 24 08:47:36 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 24 08:47:36 localhost kernel: XFS (vda1): Ending clean mount
Nov 24 08:47:36 localhost systemd[1]: Mounted /sysroot.
Nov 24 08:47:36 localhost systemd[1]: Reached target Initrd Root File System.
Nov 24 08:47:36 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 24 08:47:36 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 24 08:47:36 localhost systemd[1]: Reached target Initrd File Systems.
Nov 24 08:47:36 localhost systemd[1]: Reached target Initrd Default Target.
Nov 24 08:47:36 localhost systemd[1]: Starting dracut mount hook...
Nov 24 08:47:36 localhost systemd[1]: Finished dracut mount hook.
Nov 24 08:47:36 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 24 08:47:36 localhost rpc.idmapd[447]: exiting on signal 15
Nov 24 08:47:36 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 24 08:47:36 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 24 08:47:36 localhost systemd[1]: Stopped target Network.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Timer Units.
Nov 24 08:47:36 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 24 08:47:36 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Basic System.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Path Units.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Remote File Systems.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Slice Units.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Socket Units.
Nov 24 08:47:36 localhost systemd[1]: Stopped target System Initialization.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Local File Systems.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Swaps.
Nov 24 08:47:36 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped dracut mount hook.
Nov 24 08:47:36 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 24 08:47:36 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 24 08:47:36 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 24 08:47:36 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 24 08:47:36 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 24 08:47:36 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 24 08:47:36 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 24 08:47:36 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 24 08:47:36 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 24 08:47:36 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 24 08:47:36 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 24 08:47:36 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 24 08:47:36 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Closed udev Control Socket.
Nov 24 08:47:36 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Closed udev Kernel Socket.
Nov 24 08:47:36 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 24 08:47:36 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 24 08:47:36 localhost systemd[1]: Starting Cleanup udev Database...
Nov 24 08:47:36 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 24 08:47:36 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 24 08:47:36 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Stopped Create System Users.
Nov 24 08:47:36 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 24 08:47:36 localhost systemd[1]: Finished Cleanup udev Database.
Nov 24 08:47:36 localhost systemd[1]: Reached target Switch Root.
Nov 24 08:47:36 localhost systemd[1]: Starting Switch Root...
Nov 24 08:47:36 localhost systemd[1]: Switching root.
Nov 24 08:47:36 localhost systemd-journald[304]: Journal stopped
Nov 24 08:47:37 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Nov 24 08:47:37 localhost kernel: audit: type=1404 audit(1763974056.933:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 24 08:47:37 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 08:47:37 localhost kernel: SELinux:  policy capability open_perms=1
Nov 24 08:47:37 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 08:47:37 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 24 08:47:37 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 08:47:37 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 08:47:37 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 08:47:37 localhost kernel: audit: type=1403 audit(1763974057.105:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 24 08:47:37 localhost systemd[1]: Successfully loaded SELinux policy in 175.943ms.
Nov 24 08:47:37 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.767ms.
Nov 24 08:47:37 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 08:47:37 localhost systemd[1]: Detected virtualization kvm.
Nov 24 08:47:37 localhost systemd[1]: Detected architecture x86-64.
Nov 24 08:47:37 localhost systemd-rc-local-generator[634]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 08:47:37 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 24 08:47:37 localhost systemd[1]: Stopped Switch Root.
Nov 24 08:47:37 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 24 08:47:37 localhost systemd[1]: Created slice Slice /system/getty.
Nov 24 08:47:37 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 24 08:47:37 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 24 08:47:37 localhost systemd[1]: Created slice User and Session Slice.
Nov 24 08:47:37 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 08:47:37 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 24 08:47:37 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 24 08:47:37 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 24 08:47:37 localhost systemd[1]: Stopped target Switch Root.
Nov 24 08:47:37 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 24 08:47:37 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 24 08:47:37 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 24 08:47:37 localhost systemd[1]: Reached target Path Units.
Nov 24 08:47:37 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 24 08:47:37 localhost systemd[1]: Reached target Slice Units.
Nov 24 08:47:37 localhost systemd[1]: Reached target Swaps.
Nov 24 08:47:37 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 24 08:47:37 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 24 08:47:37 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 24 08:47:37 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 24 08:47:37 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 24 08:47:37 localhost systemd[1]: Listening on udev Control Socket.
Nov 24 08:47:37 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 24 08:47:37 localhost systemd[1]: Mounting Huge Pages File System...
Nov 24 08:47:37 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 24 08:47:37 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 24 08:47:37 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 24 08:47:37 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 08:47:37 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 24 08:47:37 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 08:47:37 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 24 08:47:37 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 24 08:47:37 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 24 08:47:37 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 24 08:47:37 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 24 08:47:37 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 24 08:47:37 localhost systemd[1]: Stopped Journal Service.
Nov 24 08:47:37 localhost systemd[1]: Starting Journal Service...
Nov 24 08:47:37 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 08:47:37 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 24 08:47:37 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 08:47:37 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 24 08:47:37 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 08:47:37 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 24 08:47:37 localhost kernel: fuse: init (API version 7.37)
Nov 24 08:47:37 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 24 08:47:37 localhost systemd[1]: Mounted Huge Pages File System.
Nov 24 08:47:37 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 24 08:47:37 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 24 08:47:37 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 24 08:47:37 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 24 08:47:37 localhost systemd-journald[675]: Journal started
Nov 24 08:47:37 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 08:47:37 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 24 08:47:37 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 24 08:47:37 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 24 08:47:37 localhost systemd[1]: Started Journal Service.
Nov 24 08:47:37 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 08:47:37 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 08:47:37 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 08:47:37 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 24 08:47:37 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 24 08:47:37 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 24 08:47:37 localhost kernel: ACPI: bus type drm_connector registered
Nov 24 08:47:37 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 24 08:47:37 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 08:47:37 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 24 08:47:37 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 24 08:47:37 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 24 08:47:37 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 24 08:47:37 localhost systemd[1]: Mounting FUSE Control File System...
Nov 24 08:47:37 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 08:47:37 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 24 08:47:37 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 24 08:47:37 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 08:47:37 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 24 08:47:37 localhost systemd[1]: Starting Create System Users...
Nov 24 08:47:37 localhost systemd[1]: Mounted FUSE Control File System.
Nov 24 08:47:37 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 08:47:37 localhost systemd-journald[675]: Received client request to flush runtime journal.
Nov 24 08:47:37 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 24 08:47:37 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 24 08:47:37 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 08:47:37 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 24 08:47:37 localhost systemd[1]: Finished Create System Users.
Nov 24 08:47:37 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 08:47:37 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 08:47:37 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 24 08:47:37 localhost systemd[1]: Reached target Local File Systems.
Nov 24 08:47:38 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 24 08:47:38 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 24 08:47:38 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 08:47:38 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 24 08:47:38 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 24 08:47:38 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 24 08:47:38 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 08:47:38 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Nov 24 08:47:38 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 24 08:47:38 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 08:47:38 localhost systemd[1]: Starting Security Auditing Service...
Nov 24 08:47:38 localhost systemd[1]: Starting RPC Bind...
Nov 24 08:47:38 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 24 08:47:38 localhost auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 24 08:47:38 localhost auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 24 08:47:38 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 24 08:47:38 localhost systemd[1]: Started RPC Bind.
Nov 24 08:47:38 localhost augenrules[706]: /sbin/augenrules: No change
Nov 24 08:47:38 localhost augenrules[721]: No rules
Nov 24 08:47:38 localhost augenrules[721]: enabled 1
Nov 24 08:47:38 localhost augenrules[721]: failure 1
Nov 24 08:47:38 localhost augenrules[721]: pid 701
Nov 24 08:47:38 localhost augenrules[721]: rate_limit 0
Nov 24 08:47:38 localhost augenrules[721]: backlog_limit 8192
Nov 24 08:47:38 localhost augenrules[721]: lost 0
Nov 24 08:47:38 localhost augenrules[721]: backlog 0
Nov 24 08:47:38 localhost augenrules[721]: backlog_wait_time 60000
Nov 24 08:47:38 localhost augenrules[721]: backlog_wait_time_actual 0
Nov 24 08:47:38 localhost augenrules[721]: enabled 1
Nov 24 08:47:38 localhost augenrules[721]: failure 1
Nov 24 08:47:38 localhost augenrules[721]: pid 701
Nov 24 08:47:38 localhost augenrules[721]: rate_limit 0
Nov 24 08:47:38 localhost augenrules[721]: backlog_limit 8192
Nov 24 08:47:38 localhost augenrules[721]: lost 0
Nov 24 08:47:38 localhost augenrules[721]: backlog 0
Nov 24 08:47:38 localhost augenrules[721]: backlog_wait_time 60000
Nov 24 08:47:38 localhost augenrules[721]: backlog_wait_time_actual 0
Nov 24 08:47:38 localhost augenrules[721]: enabled 1
Nov 24 08:47:38 localhost augenrules[721]: failure 1
Nov 24 08:47:38 localhost augenrules[721]: pid 701
Nov 24 08:47:38 localhost augenrules[721]: rate_limit 0
Nov 24 08:47:38 localhost augenrules[721]: backlog_limit 8192
Nov 24 08:47:38 localhost augenrules[721]: lost 0
Nov 24 08:47:38 localhost augenrules[721]: backlog 0
Nov 24 08:47:38 localhost augenrules[721]: backlog_wait_time 60000
Nov 24 08:47:38 localhost augenrules[721]: backlog_wait_time_actual 0
Nov 24 08:47:38 localhost systemd[1]: Started Security Auditing Service.
Nov 24 08:47:38 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 24 08:47:38 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 24 08:47:38 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 24 08:47:38 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 08:47:38 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 08:47:38 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 24 08:47:38 localhost systemd[1]: Starting Update is Completed...
Nov 24 08:47:38 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 08:47:38 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 24 08:47:38 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 08:47:38 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 24 08:47:38 localhost systemd[1]: Finished Update is Completed.
Nov 24 08:47:38 localhost systemd-udevd[738]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 08:47:38 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 24 08:47:38 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 24 08:47:38 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 24 08:47:38 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 24 08:47:38 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 24 08:47:38 localhost kernel: kvm_amd: TSC scaling supported
Nov 24 08:47:38 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 24 08:47:38 localhost kernel: kvm_amd: Nested Paging enabled
Nov 24 08:47:38 localhost kernel: kvm_amd: LBR virtualization supported
Nov 24 08:47:38 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 24 08:47:38 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 24 08:47:38 localhost kernel: Console: switching to colour dummy device 80x25
Nov 24 08:47:38 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 24 08:47:38 localhost kernel: [drm] features: -context_init
Nov 24 08:47:38 localhost kernel: [drm] number of scanouts: 1
Nov 24 08:47:38 localhost kernel: [drm] number of cap sets: 0
Nov 24 08:47:38 localhost systemd[1]: Reached target System Initialization.
Nov 24 08:47:38 localhost systemd[1]: Started dnf makecache --timer.
Nov 24 08:47:38 localhost systemd[1]: Started Daily rotation of log files.
Nov 24 08:47:38 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 24 08:47:38 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 24 08:47:38 localhost systemd[1]: Reached target Timer Units.
Nov 24 08:47:38 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 24 08:47:38 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 24 08:47:38 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 24 08:47:38 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 24 08:47:38 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 24 08:47:38 localhost systemd[1]: Reached target Socket Units.
Nov 24 08:47:38 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 24 08:47:38 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 08:47:38 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 24 08:47:38 localhost dbus-broker-lau[790]: Ready
Nov 24 08:47:38 localhost systemd[1]: Reached target Basic System.
Nov 24 08:47:38 localhost systemd[1]: Starting NTP client/server...
Nov 24 08:47:38 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 24 08:47:38 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 24 08:47:38 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 24 08:47:38 localhost systemd[1]: Started irqbalance daemon.
Nov 24 08:47:38 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 24 08:47:38 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 08:47:38 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 08:47:38 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 08:47:38 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 24 08:47:38 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 24 08:47:38 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 24 08:47:38 localhost systemd[1]: Starting User Login Management...
Nov 24 08:47:38 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 24 08:47:38 localhost chronyd[830]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 08:47:38 localhost chronyd[830]: Loaded 0 symmetric keys
Nov 24 08:47:38 localhost chronyd[830]: Using right/UTC timezone to obtain leap second data
Nov 24 08:47:38 localhost chronyd[830]: Loaded seccomp filter (level 2)
Nov 24 08:47:38 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 24 08:47:38 localhost systemd[1]: Started NTP client/server.
Nov 24 08:47:38 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 24 08:47:38 localhost systemd-logind[822]: New seat seat0.
Nov 24 08:47:38 localhost systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 08:47:38 localhost systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 08:47:38 localhost systemd[1]: Started User Login Management.
Nov 24 08:47:39 localhost iptables.init[816]: iptables: Applying firewall rules: [  OK  ]
Nov 24 08:47:39 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 24 08:47:39 localhost cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 24 Nov 2025 08:47:39 +0000. Up 7.19 seconds.
Nov 24 08:47:39 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 24 08:47:39 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 24 08:47:39 localhost systemd[1]: run-cloud\x2dinit-tmp-tmppwvs3dcy.mount: Deactivated successfully.
Nov 24 08:47:39 localhost systemd[1]: Starting Hostname Service...
Nov 24 08:47:39 localhost systemd[1]: Started Hostname Service.
Nov 24 08:47:39 np0005533251.novalocal systemd-hostnamed[853]: Hostname set to <np0005533251.novalocal> (static)
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Reached target Preparation for Network.
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Starting Network Manager...
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1119] NetworkManager (version 1.54.1-1.el9) is starting... (boot:428e28ae-891b-4271-8668-6c1110086104)
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1126] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1263] manager[0x5559893b7080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1302] hostname: hostname: using hostnamed
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1303] hostname: static hostname changed from (none) to "np0005533251.novalocal"
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1306] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1411] manager[0x5559893b7080]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1411] manager[0x5559893b7080]: rfkill: WWAN hardware radio set enabled
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1493] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1493] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1494] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1494] manager: Networking is enabled by state file
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1496] settings: Loaded settings plugin: keyfile (internal)
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1526] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1549] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1573] dhcp: init: Using DHCP client 'internal'
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1575] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1591] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1603] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1610] device (lo): Activation: starting connection 'lo' (78ddbdbd-6a47-40ea-a116-11a5bade7fe9)
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1618] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1620] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1643] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1647] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1651] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1653] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1656] device (eth0): carrier: link connected
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1661] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1666] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1672] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Started Network Manager.
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1677] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1679] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1681] manager: NetworkManager state is now CONNECTING
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1684] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Reached target Network.
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1693] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1697] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1834] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1837] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.1844] device (lo): Activation: successful, device activated.
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Reached target NFS client services.
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Reached target Remote File Systems.
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.5873] dhcp4 (eth0): state changed new lease, address=38.129.56.124
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.5887] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.5908] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.5946] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.5948] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.5954] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.5957] device (eth0): Activation: successful, device activated.
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.5964] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 08:47:40 np0005533251.novalocal NetworkManager[857]: <info>  [1763974060.5970] manager: startup complete
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 24 08:47:40 np0005533251.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 24 Nov 2025 08:47:40 +0000. Up 8.51 seconds.
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: |  eth0  | True |        38.129.56.124         | 255.255.255.0 | global | fa:16:3e:db:a7:d3 |
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fedb:a7d3/64 |       .       |  link  | fa:16:3e:db:a7:d3 |
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 24 08:47:40 np0005533251.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 08:47:41 np0005533251.novalocal useradd[987]: new group: name=cloud-user, GID=1001
Nov 24 08:47:41 np0005533251.novalocal useradd[987]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 24 08:47:41 np0005533251.novalocal useradd[987]: add 'cloud-user' to group 'adm'
Nov 24 08:47:41 np0005533251.novalocal useradd[987]: add 'cloud-user' to group 'systemd-journal'
Nov 24 08:47:41 np0005533251.novalocal useradd[987]: add 'cloud-user' to shadow group 'adm'
Nov 24 08:47:41 np0005533251.novalocal useradd[987]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: Generating public/private rsa key pair.
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: The key fingerprint is:
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: SHA256:NG58/HwilkX8/YXKd8fUnrrcKqhQS/CWUUzeWYafjSw root@np0005533251.novalocal
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: The key's randomart image is:
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: +---[RSA 3072]----+
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |        oo  .o   |
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |        o..o+    |
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |     . .o. o= +  |
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |      o+oo E * +.|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |       *S o o o =|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |      +... * . ++|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |     . .  = * o.*|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |      .  o o.+o..|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |       ..   .=o. |
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: +----[SHA256]-----+
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: Generating public/private ecdsa key pair.
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: The key fingerprint is:
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: SHA256:cL4Xt+GUha/Vkl/nMstBQ9rOmnbtNYuIoRdDoqvdyEQ root@np0005533251.novalocal
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: The key's randomart image is:
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: +---[ECDSA 256]---+
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |                 |
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |             .   |
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |      . .   . o  |
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |       = .   B o |
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |     E. S . * O +|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |    ..   + = O =o|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |     .. ..+ + *o+|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |    +.o .oo o+o==|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |   ..+ o.. ooooo.|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: +----[SHA256]-----+
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: Generating public/private ed25519 key pair.
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: The key fingerprint is:
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: SHA256:dxDoZ1wUTjMzk5fOaOGbXIxz5hCfXO7gb+noNOYT0nE root@np0005533251.novalocal
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: The key's randomart image is:
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: +--[ED25519 256]--+
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |         ...Xo . |
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |        .  +=*o .|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |       . ..o.% + |
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |        . +.B.%E.|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |        So.oo@oo |
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |         . o+oo .|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |            .+...|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |            +.o.o|
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: |            .+oo |
Nov 24 08:47:42 np0005533251.novalocal cloud-init[920]: +----[SHA256]-----+
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Reached target Network is Online.
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Starting System Logging Service...
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 24 08:47:42 np0005533251.novalocal sm-notify[1003]: Version 2.5.4 starting
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Starting Permit User Sessions...
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Finished Permit User Sessions.
Nov 24 08:47:42 np0005533251.novalocal sshd[1005]: Server listening on 0.0.0.0 port 22.
Nov 24 08:47:42 np0005533251.novalocal sshd[1005]: Server listening on :: port 22.
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Started Command Scheduler.
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Started Getty on tty1.
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 24 08:47:42 np0005533251.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Nov 24 08:47:42 np0005533251.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 24 08:47:42 np0005533251.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 35% if used.)
Nov 24 08:47:42 np0005533251.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Reached target Login Prompts.
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 24 08:47:42 np0005533251.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Started System Logging Service.
Nov 24 08:47:42 np0005533251.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Reached target Multi-User System.
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 24 08:47:42 np0005533251.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 08:47:42 np0005533251.novalocal kdumpctl[1013]: kdump: No kdump initial ramdisk found.
Nov 24 08:47:42 np0005533251.novalocal kdumpctl[1013]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 24 08:47:42 np0005533251.novalocal cloud-init[1146]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 24 Nov 2025 08:47:42 +0000. Up 10.06 seconds.
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 24 08:47:42 np0005533251.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 24 08:47:42 np0005533251.novalocal dracut[1264]: dracut-057-102.git20250818.el9
Nov 24 08:47:42 np0005533251.novalocal dracut[1266]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 24 08:47:42 np0005533251.novalocal cloud-init[1292]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 24 Nov 2025 08:47:42 +0000. Up 10.50 seconds.
Nov 24 08:47:42 np0005533251.novalocal cloud-init[1321]: #############################################################
Nov 24 08:47:42 np0005533251.novalocal sshd-session[1315]: Unable to negotiate with 38.102.83.114 port 46596: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 24 08:47:42 np0005533251.novalocal cloud-init[1326]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 24 08:47:42 np0005533251.novalocal cloud-init[1335]: 256 SHA256:cL4Xt+GUha/Vkl/nMstBQ9rOmnbtNYuIoRdDoqvdyEQ root@np0005533251.novalocal (ECDSA)
Nov 24 08:47:42 np0005533251.novalocal sshd-session[1330]: Connection reset by 38.102.83.114 port 46606 [preauth]
Nov 24 08:47:42 np0005533251.novalocal cloud-init[1341]: 256 SHA256:dxDoZ1wUTjMzk5fOaOGbXIxz5hCfXO7gb+noNOYT0nE root@np0005533251.novalocal (ED25519)
Nov 24 08:47:43 np0005533251.novalocal cloud-init[1349]: 3072 SHA256:NG58/HwilkX8/YXKd8fUnrrcKqhQS/CWUUzeWYafjSw root@np0005533251.novalocal (RSA)
Nov 24 08:47:43 np0005533251.novalocal sshd-session[1344]: Unable to negotiate with 38.102.83.114 port 46622: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 24 08:47:43 np0005533251.novalocal cloud-init[1352]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 24 08:47:43 np0005533251.novalocal cloud-init[1353]: #############################################################
Nov 24 08:47:43 np0005533251.novalocal sshd-session[1354]: Unable to negotiate with 38.102.83.114 port 46624: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 24 08:47:43 np0005533251.novalocal sshd-session[1300]: Connection closed by 38.102.83.114 port 46580 [preauth]
Nov 24 08:47:43 np0005533251.novalocal cloud-init[1292]: Cloud-init v. 24.4-7.el9 finished at Mon, 24 Nov 2025 08:47:43 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.68 seconds
Nov 24 08:47:43 np0005533251.novalocal sshd-session[1366]: Unable to negotiate with 38.102.83.114 port 46652: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 24 08:47:43 np0005533251.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 24 08:47:43 np0005533251.novalocal systemd[1]: Reached target Cloud-init target.
Nov 24 08:47:43 np0005533251.novalocal sshd-session[1371]: Unable to negotiate with 38.102.83.114 port 46654: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 24 08:47:43 np0005533251.novalocal sshd-session[1359]: Connection closed by 38.102.83.114 port 46626 [preauth]
Nov 24 08:47:43 np0005533251.novalocal sshd-session[1361]: Connection closed by 38.102.83.114 port 46642 [preauth]
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 24 08:47:43 np0005533251.novalocal dracut[1266]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: memstrack is not available
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: memstrack is not available
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: *** Including module: systemd ***
Nov 24 08:47:44 np0005533251.novalocal dracut[1266]: *** Including module: fips ***
Nov 24 08:47:45 np0005533251.novalocal dracut[1266]: *** Including module: systemd-initrd ***
Nov 24 08:47:45 np0005533251.novalocal dracut[1266]: *** Including module: i18n ***
Nov 24 08:47:45 np0005533251.novalocal dracut[1266]: *** Including module: drm ***
Nov 24 08:47:45 np0005533251.novalocal chronyd[830]: Selected source 216.197.156.83 (2.centos.pool.ntp.org)
Nov 24 08:47:45 np0005533251.novalocal chronyd[830]: System clock TAI offset set to 37 seconds
Nov 24 08:47:45 np0005533251.novalocal dracut[1266]: *** Including module: prefixdevname ***
Nov 24 08:47:45 np0005533251.novalocal dracut[1266]: *** Including module: kernel-modules ***
Nov 24 08:47:45 np0005533251.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]: *** Including module: kernel-modules-extra ***
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]: *** Including module: qemu ***
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]: *** Including module: fstab-sys ***
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]: *** Including module: rootfs-block ***
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]: *** Including module: terminfo ***
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]: *** Including module: udev-rules ***
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]: Skipping udev rule: 91-permissions.rules
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]: *** Including module: virtiofs ***
Nov 24 08:47:46 np0005533251.novalocal dracut[1266]: *** Including module: dracut-systemd ***
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]: *** Including module: usrmount ***
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]: *** Including module: base ***
Nov 24 08:47:47 np0005533251.novalocal systemd[1]: getty@tty1.service: Deactivated successfully.
Nov 24 08:47:47 np0005533251.novalocal systemd[1]: getty@tty1.service: Scheduled restart job, restart counter is at 1.
Nov 24 08:47:47 np0005533251.novalocal systemd[1]: Stopped Getty on tty1.
Nov 24 08:47:47 np0005533251.novalocal systemd[1]: Started Getty on tty1.
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]: *** Including module: fs-lib ***
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]: *** Including module: kdumpbase ***
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:   microcode_ctl module: mangling fw_dir
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: configuration "intel" is ignored
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 24 08:47:47 np0005533251.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 24 08:47:48 np0005533251.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 24 08:47:48 np0005533251.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 24 08:47:48 np0005533251.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 24 08:47:48 np0005533251.novalocal dracut[1266]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 24 08:47:48 np0005533251.novalocal dracut[1266]: *** Including module: openssl ***
Nov 24 08:47:48 np0005533251.novalocal dracut[1266]: *** Including module: shutdown ***
Nov 24 08:47:48 np0005533251.novalocal dracut[1266]: *** Including module: squash ***
Nov 24 08:47:48 np0005533251.novalocal dracut[1266]: *** Including modules done ***
Nov 24 08:47:48 np0005533251.novalocal dracut[1266]: *** Installing kernel module dependencies ***
Nov 24 08:47:49 np0005533251.novalocal dracut[1266]: *** Installing kernel module dependencies done ***
Nov 24 08:47:49 np0005533251.novalocal dracut[1266]: *** Resolving executable dependencies ***
Nov 24 08:47:49 np0005533251.novalocal irqbalance[817]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 24 08:47:49 np0005533251.novalocal irqbalance[817]: IRQ 25 affinity is now unmanaged
Nov 24 08:47:49 np0005533251.novalocal irqbalance[817]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 24 08:47:49 np0005533251.novalocal irqbalance[817]: IRQ 31 affinity is now unmanaged
Nov 24 08:47:49 np0005533251.novalocal irqbalance[817]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 24 08:47:49 np0005533251.novalocal irqbalance[817]: IRQ 28 affinity is now unmanaged
Nov 24 08:47:49 np0005533251.novalocal irqbalance[817]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 24 08:47:49 np0005533251.novalocal irqbalance[817]: IRQ 32 affinity is now unmanaged
Nov 24 08:47:49 np0005533251.novalocal irqbalance[817]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 24 08:47:49 np0005533251.novalocal irqbalance[817]: IRQ 30 affinity is now unmanaged
Nov 24 08:47:49 np0005533251.novalocal irqbalance[817]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 24 08:47:49 np0005533251.novalocal irqbalance[817]: IRQ 29 affinity is now unmanaged
Nov 24 08:47:50 np0005533251.novalocal dracut[1266]: *** Resolving executable dependencies done ***
Nov 24 08:47:50 np0005533251.novalocal dracut[1266]: *** Generating early-microcode cpio image ***
Nov 24 08:47:50 np0005533251.novalocal dracut[1266]: *** Store current command line parameters ***
Nov 24 08:47:50 np0005533251.novalocal dracut[1266]: Stored kernel commandline:
Nov 24 08:47:50 np0005533251.novalocal dracut[1266]: No dracut internal kernel commandline stored in the initramfs
Nov 24 08:47:50 np0005533251.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 08:47:50 np0005533251.novalocal dracut[1266]: *** Install squash loader ***
Nov 24 08:47:51 np0005533251.novalocal dracut[1266]: *** Squashing the files inside the initramfs ***
Nov 24 08:47:52 np0005533251.novalocal dracut[1266]: *** Squashing the files inside the initramfs done ***
Nov 24 08:47:52 np0005533251.novalocal dracut[1266]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 24 08:47:52 np0005533251.novalocal dracut[1266]: *** Hardlinking files ***
Nov 24 08:47:52 np0005533251.novalocal dracut[1266]: Mode:           real
Nov 24 08:47:52 np0005533251.novalocal dracut[1266]: Files:          50
Nov 24 08:47:52 np0005533251.novalocal dracut[1266]: Linked:         0 files
Nov 24 08:47:52 np0005533251.novalocal dracut[1266]: Compared:       0 xattrs
Nov 24 08:47:52 np0005533251.novalocal dracut[1266]: Compared:       0 files
Nov 24 08:47:52 np0005533251.novalocal dracut[1266]: Saved:          0 B
Nov 24 08:47:52 np0005533251.novalocal dracut[1266]: Duration:       0.000511 seconds
Nov 24 08:47:52 np0005533251.novalocal dracut[1266]: *** Hardlinking files done ***
Nov 24 08:47:53 np0005533251.novalocal dracut[1266]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 24 08:47:53 np0005533251.novalocal kdumpctl[1013]: kdump: kexec: loaded kdump kernel
Nov 24 08:47:53 np0005533251.novalocal kdumpctl[1013]: kdump: Starting kdump: [OK]
Nov 24 08:47:53 np0005533251.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 24 08:47:53 np0005533251.novalocal systemd[1]: Startup finished in 1.497s (kernel) + 3.056s (initrd) + 16.887s (userspace) = 21.441s.
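The startup summary above is additive, but each phase is rounded independently before printing, so the printed total can differ from the naive sum by a millisecond. A quick check:

    # Phase durations as printed by systemd above
    kernel, initrd, userspace = 1.497, 3.056, 16.887
    print(f"{kernel + initrd + userspace:.3f}s")  # 21.440s vs. the reported 21.441s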
Nov 24 08:48:06 np0005533251.novalocal sshd-session[4296]: Accepted publickey for zuul from 38.102.83.114 port 36548 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 24 08:48:06 np0005533251.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 24 08:48:06 np0005533251.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 24 08:48:06 np0005533251.novalocal systemd-logind[822]: New session 1 of user zuul.
Nov 24 08:48:06 np0005533251.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 24 08:48:06 np0005533251.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 24 08:48:06 np0005533251.novalocal systemd[4301]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Queued start job for default target Main User Target.
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Created slice User Application Slice.
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Reached target Paths.
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Reached target Timers.
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Starting D-Bus User Message Bus Socket...
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Starting Create User's Volatile Files and Directories...
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Finished Create User's Volatile Files and Directories.
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Listening on D-Bus User Message Bus Socket.
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Reached target Sockets.
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Reached target Basic System.
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Reached target Main User Target.
Nov 24 08:48:07 np0005533251.novalocal systemd[4301]: Startup finished in 119ms.
Nov 24 08:48:07 np0005533251.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 24 08:48:07 np0005533251.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 24 08:48:07 np0005533251.novalocal sshd-session[4296]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 08:48:07 np0005533251.novalocal python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 08:48:10 np0005533251.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 08:48:10 np0005533251.novalocal python3[4413]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 08:48:18 np0005533251.novalocal python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 08:48:19 np0005533251.novalocal python3[4511]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 24 08:48:22 np0005533251.novalocal python3[4537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtGjVEb/lsDO7QcEMCxozreHmfSkbPYtukhJN3wVhqpj6xeUXDPmoULx3/bUoF5EPUMcOV3spnCrShHpk7CaLVLFC6oNrQxPD181TchE78zphBpk8I1ehE8T9c7obAmyKrEcACWMj7F602jB1LiYcFYv4jlfDhyW3uTQnip2LICS2Kfa99lM5/ASVfbkov0rOqv+cDcBEhm9XXnUuxfGF0JDXhqv4Moan3wsyDreG2bhonj0B8vCTteeQ78h13an4IV58Xfard0MCw6jIS9DyQLfwpc3OLaKIMe3CC2oVRB77qysEMlCAEihHk42CgdoK8E/tovexbpxYDVKE2PymKN81ObjmT/CgplB54Mo8icraKe+Q1PzX43HsSi20RnipJFuMU33UpP94PO+WoB11gl03bBmluLjuLt4uV5EmciWyTP/feSffjrkuNiIBwXnGakV1+NRH2S8kMbnITAdJAdL3vn8XkYw9FARF1VW6T8Ft+GxeEEJxt8kii/56xDiM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:22 np0005533251.novalocal python3[4561]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:23 np0005533251.novalocal python3[4660]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:23 np0005533251.novalocal python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763974102.6987975-251-174148235058304/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=7f3765bc298a427e931eb426db28639c_id_rsa follow=False checksum=1ba3cc8ce402543c463affbc560046c840463cbe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:24 np0005533251.novalocal python3[4854]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:24 np0005533251.novalocal python3[4925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763974103.7316625-306-125104312362448/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=7f3765bc298a427e931eb426db28639c_id_rsa.pub follow=False checksum=2646cb7a5a0d5a58175bb49a3d139e585d675669 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:25 np0005533251.novalocal python3[4973]: ansible-ping Invoked with data=pong
Nov 24 08:48:26 np0005533251.novalocal python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 08:48:29 np0005533251.novalocal python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 24 08:48:31 np0005533251.novalocal python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:31 np0005533251.novalocal python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:31 np0005533251.novalocal python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:31 np0005533251.novalocal python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:32 np0005533251.novalocal python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:32 np0005533251.novalocal python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
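Note that Ansible logs file modes in decimal, so mode=493 above is octal 0755, mode=448 (for ~/.ssh earlier) is 0700, mode=384 is 0600, mode=420 is 0644, mode=511 (for /etc/nodepool later) is 0777, and mode=288 (for the sudoers drop-in) is 0440. A one-liner to convert:

    for mode in (493, 448, 384, 420, 511, 288):
        print(mode, oct(mode))  # 493 0o755, 448 0o700, 384 0o600, 420 0o644, 511 0o777, 288 0o440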
Nov 24 08:48:34 np0005533251.novalocal sudo[5231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwjmiqqwaanqwlncgoytarzrhrdgzoql ; /usr/bin/python3'
Nov 24 08:48:34 np0005533251.novalocal sudo[5231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:34 np0005533251.novalocal python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:34 np0005533251.novalocal sudo[5231]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:34 np0005533251.novalocal sudo[5309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ochuvvinuvthhalpdxlvfsagzbzxcnrt ; /usr/bin/python3'
Nov 24 08:48:34 np0005533251.novalocal sudo[5309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:34 np0005533251.novalocal python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:34 np0005533251.novalocal sudo[5309]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:35 np0005533251.novalocal sudo[5382]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isauchcvijslyfbobjfwlxwrvjuhaavw ; /usr/bin/python3'
Nov 24 08:48:35 np0005533251.novalocal sudo[5382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:35 np0005533251.novalocal python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974114.4965258-31-197717154555351/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:35 np0005533251.novalocal sudo[5382]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:36 np0005533251.novalocal python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:36 np0005533251.novalocal python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:36 np0005533251.novalocal python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:36 np0005533251.novalocal python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:37 np0005533251.novalocal python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:37 np0005533251.novalocal python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:37 np0005533251.novalocal python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:37 np0005533251.novalocal python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:38 np0005533251.novalocal python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:38 np0005533251.novalocal python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:38 np0005533251.novalocal python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:39 np0005533251.novalocal python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:39 np0005533251.novalocal python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:39 np0005533251.novalocal python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:39 np0005533251.novalocal python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:40 np0005533251.novalocal python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:40 np0005533251.novalocal python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:40 np0005533251.novalocal python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:40 np0005533251.novalocal python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:41 np0005533251.novalocal python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:41 np0005533251.novalocal python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:41 np0005533251.novalocal python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:41 np0005533251.novalocal python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:42 np0005533251.novalocal python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:42 np0005533251.novalocal python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 08:48:42 np0005533251.novalocal python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
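Each ansible-authorized_key call above idempotently installs one public key for the zuul user: a key that is already present reports 'ok', a missing one is appended. A hedged sketch of that behavior (an illustration, not the module's actual implementation):

    from pathlib import Path

    def ensure_key(home: str, pubkey: str) -> bool:
        """Append pubkey to ~/.ssh/authorized_keys unless already present."""
        auth = Path(home, ".ssh", "authorized_keys")
        auth.parent.mkdir(mode=0o700, exist_ok=True)
        lines = auth.read_text().splitlines() if auth.exists() else []
        if pubkey in lines:
            return False  # unchanged
        with auth.open("a") as fh:
            fh.write(pubkey + "\n")
        auth.chmod(0o600)
        return True  # changed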
Nov 24 08:48:45 np0005533251.novalocal sudo[6056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogybdbvfvfrtlxswlkajctlswfoykmux ; /usr/bin/python3'
Nov 24 08:48:45 np0005533251.novalocal sudo[6056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:45 np0005533251.novalocal python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 08:48:45 np0005533251.novalocal systemd[1]: Starting Time & Date Service...
Nov 24 08:48:45 np0005533251.novalocal systemd[1]: Started Time & Date Service.
Nov 24 08:48:45 np0005533251.novalocal systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Nov 24 08:48:45 np0005533251.novalocal sudo[6056]: pam_unix(sudo:session): session closed for user root
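The community.general.timezone task above drives systemd-timedated over D-Bus; the manual equivalent on this host would be timedatectl. A minimal sketch (assumes timedatectl is present and the caller is privileged):

    import subprocess

    # Same effect as the Ansible task: switch the system timezone to UTC
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)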
Nov 24 08:48:45 np0005533251.novalocal sudo[6087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isqvldvpghvlcyciobgkfalkjlpxsavv ; /usr/bin/python3'
Nov 24 08:48:45 np0005533251.novalocal sudo[6087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:46 np0005533251.novalocal python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:46 np0005533251.novalocal sudo[6087]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:46 np0005533251.novalocal python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:46 np0005533251.novalocal python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763974126.224626-251-79101231390622/source _original_basename=tmp50997lp7 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:47 np0005533251.novalocal python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:47 np0005533251.novalocal python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763974126.9786062-301-79964932735818/source _original_basename=tmp47fxaaqc follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:48 np0005533251.novalocal sudo[6507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnsywmbbcovggkwmgwotdobkbozhjbsp ; /usr/bin/python3'
Nov 24 08:48:48 np0005533251.novalocal sudo[6507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:48 np0005533251.novalocal python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:48 np0005533251.novalocal sudo[6507]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:48 np0005533251.novalocal sudo[6580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjusokazwdmvdnprpjxymurjmxkzfeoj ; /usr/bin/python3'
Nov 24 08:48:48 np0005533251.novalocal sudo[6580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:48 np0005533251.novalocal python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763974128.0577252-381-43153849477469/source _original_basename=tmpkmyxeyye follow=False checksum=b56c6897fa19c5ea0db56764a8ee4e876a047f10 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:48 np0005533251.novalocal sudo[6580]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:49 np0005533251.novalocal python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:48:49 np0005533251.novalocal python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:48:49 np0005533251.novalocal sudo[6734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyrvduzmjrguzewhiezkgnqtyxmqczjk ; /usr/bin/python3'
Nov 24 08:48:49 np0005533251.novalocal sudo[6734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:49 np0005533251.novalocal python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:48:49 np0005533251.novalocal sudo[6734]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:50 np0005533251.novalocal sudo[6807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trhhwgoutzuhvctigaeyfcegwhnzwlgj ; /usr/bin/python3'
Nov 24 08:48:50 np0005533251.novalocal sudo[6807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:50 np0005533251.novalocal python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974129.6375382-451-147060396253973/source _original_basename=tmpqdnnf9eb follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:48:50 np0005533251.novalocal sudo[6807]: pam_unix(sudo:session): session closed for user root
Nov 24 08:48:50 np0005533251.novalocal sudo[6858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpsjcpzoljeekceybedsrygnhvkcigwl ; /usr/bin/python3'
Nov 24 08:48:50 np0005533251.novalocal sudo[6858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:48:50 np0005533251.novalocal python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-e0db-79ba-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:48:51 np0005533251.novalocal sudo[6858]: pam_unix(sudo:session): session closed for user root
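The sequence above is the standard safe-sudoers workflow: copy the drop-in with mode 0440, then run visudo -c so a syntax error is caught before it can lock out sudo. A sketch of the validation step:

    import subprocess

    # visudo -c parses /etc/sudoers and /etc/sudoers.d/* without applying anything
    result = subprocess.run(["/usr/sbin/visudo", "-c"],
                            capture_output=True, text=True)
    print(result.returncode, result.stdout.strip())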
Nov 24 08:48:51 np0005533251.novalocal chronyd[830]: Selected source 207.34.48.31 (2.centos.pool.ntp.org)
Nov 24 08:48:51 np0005533251.novalocal python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ec2-ffbe-e0db-79ba-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 24 08:48:52 np0005533251.novalocal python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:49:11 np0005533251.novalocal sudo[6940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwyxteasbvutoztjgaufspvsdwmxngns ; /usr/bin/python3'
Nov 24 08:49:11 np0005533251.novalocal sudo[6940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:49:12 np0005533251.novalocal python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:49:12 np0005533251.novalocal sudo[6940]: pam_unix(sudo:session): session closed for user root
Nov 24 08:49:15 np0005533251.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 08:49:55 np0005533251.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 08:49:55 np0005533251.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 24 08:49:55 np0005533251.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 24 08:49:55 np0005533251.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 24 08:49:55 np0005533251.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 24 08:49:55 np0005533251.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 24 08:49:55 np0005533251.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 24 08:49:55 np0005533251.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 24 08:49:55 np0005533251.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 24 08:49:55 np0005533251.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 24 08:49:55 np0005533251.novalocal NetworkManager[857]: <info>  [1763974195.9092] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 08:49:55 np0005533251.novalocal systemd-udevd[6946]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 08:49:55 np0005533251.novalocal NetworkManager[857]: <info>  [1763974195.9243] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 08:49:55 np0005533251.novalocal NetworkManager[857]: <info>  [1763974195.9265] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 24 08:49:55 np0005533251.novalocal NetworkManager[857]: <info>  [1763974195.9268] device (eth1): carrier: link connected
Nov 24 08:49:55 np0005533251.novalocal NetworkManager[857]: <info>  [1763974195.9269] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 08:49:55 np0005533251.novalocal NetworkManager[857]: <info>  [1763974195.9274] policy: auto-activating connection 'Wired connection 1' (d1aedfc8-f035-3e2e-89d5-e0202d550efc)
Nov 24 08:49:55 np0005533251.novalocal NetworkManager[857]: <info>  [1763974195.9277] device (eth1): Activation: starting connection 'Wired connection 1' (d1aedfc8-f035-3e2e-89d5-e0202d550efc)
Nov 24 08:49:55 np0005533251.novalocal NetworkManager[857]: <info>  [1763974195.9278] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 08:49:55 np0005533251.novalocal NetworkManager[857]: <info>  [1763974195.9280] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 08:49:55 np0005533251.novalocal NetworkManager[857]: <info>  [1763974195.9283] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 08:49:55 np0005533251.novalocal NetworkManager[857]: <info>  [1763974195.9286] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 08:49:56 np0005533251.novalocal python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-0f51-775e-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
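The ip -j link call above returns the link table as JSON, which is what makes it usable from a playbook without screen-scraping. A minimal parse of the interface names:

    import json
    import subprocess

    out = subprocess.run(["ip", "-j", "link"],
                         capture_output=True, text=True, check=True).stdout
    print([link["ifname"] for link in json.loads(out)])  # e.g. ['lo', 'eth0', 'eth1']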
Nov 24 08:50:06 np0005533251.novalocal sudo[7050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrdximyfzcudndtafeqprnktjfgmoerx ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 08:50:06 np0005533251.novalocal sudo[7050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:50:06 np0005533251.novalocal python3[7052]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:50:06 np0005533251.novalocal sudo[7050]: pam_unix(sudo:session): session closed for user root
Nov 24 08:50:07 np0005533251.novalocal sudo[7123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljwxgjenidcymealvtflgasorgtsyrey ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 08:50:07 np0005533251.novalocal sudo[7123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:50:07 np0005533251.novalocal python3[7125]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763974206.595003-104-240912298291507/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=39286fd12689ca6a5d544021d50c8d0b170872dc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:50:07 np0005533251.novalocal sudo[7123]: pam_unix(sudo:session): session closed for user root
Nov 24 08:50:07 np0005533251.novalocal sudo[7173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrzrmzywmnepnlhqbjwofvgvcxdhsceo ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 08:50:07 np0005533251.novalocal sudo[7173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:50:07 np0005533251.novalocal python3[7175]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 08:50:07 np0005533251.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 08:50:07 np0005533251.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 24 08:50:07 np0005533251.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 24 08:50:07 np0005533251.novalocal systemd[1]: Stopping Network Manager...
Nov 24 08:50:07 np0005533251.novalocal NetworkManager[857]: <info>  [1763974207.9132] caught SIGTERM, shutting down normally.
Nov 24 08:50:07 np0005533251.novalocal NetworkManager[857]: <info>  [1763974207.9139] dhcp4 (eth0): canceled DHCP transaction
Nov 24 08:50:07 np0005533251.novalocal NetworkManager[857]: <info>  [1763974207.9139] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 08:50:07 np0005533251.novalocal NetworkManager[857]: <info>  [1763974207.9139] dhcp4 (eth0): state changed no lease
Nov 24 08:50:07 np0005533251.novalocal NetworkManager[857]: <info>  [1763974207.9141] manager: NetworkManager state is now CONNECTING
Nov 24 08:50:07 np0005533251.novalocal NetworkManager[857]: <info>  [1763974207.9271] dhcp4 (eth1): canceled DHCP transaction
Nov 24 08:50:07 np0005533251.novalocal NetworkManager[857]: <info>  [1763974207.9271] dhcp4 (eth1): state changed no lease
Nov 24 08:50:07 np0005533251.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 08:50:07 np0005533251.novalocal NetworkManager[857]: <info>  [1763974207.9322] exiting (success)
Nov 24 08:50:07 np0005533251.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 08:50:07 np0005533251.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 08:50:07 np0005533251.novalocal systemd[1]: Stopped Network Manager.
Nov 24 08:50:07 np0005533251.novalocal systemd[1]: Starting Network Manager...
Nov 24 08:50:07 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974207.9845] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:428e28ae-891b-4271-8668-6c1110086104)
Nov 24 08:50:07 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974207.9847] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 08:50:07 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974207.9895] manager[0x55cc097e4070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 08:50:08 np0005533251.novalocal systemd[1]: Starting Hostname Service...
Nov 24 08:50:08 np0005533251.novalocal systemd[1]: Started Hostname Service.
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0559] hostname: hostname: using hostnamed
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0559] hostname: static hostname changed from (none) to "np0005533251.novalocal"
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0563] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0567] manager[0x55cc097e4070]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0568] manager[0x55cc097e4070]: rfkill: WWAN hardware radio set enabled
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0590] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0591] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0592] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0593] manager: Networking is enabled by state file
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0595] settings: Loaded settings plugin: keyfile (internal)
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0599] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0621] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0629] dhcp: init: Using DHCP client 'internal'
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0632] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0636] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0642] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0649] device (lo): Activation: starting connection 'lo' (78ddbdbd-6a47-40ea-a116-11a5bade7fe9)
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0655] device (eth0): carrier: link connected
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0659] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0665] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0666] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0673] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0680] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0685] device (eth1): carrier: link connected
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0689] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0693] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (d1aedfc8-f035-3e2e-89d5-e0202d550efc) (indicated)
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0694] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0698] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0705] device (eth1): Activation: starting connection 'Wired connection 1' (d1aedfc8-f035-3e2e-89d5-e0202d550efc)
Nov 24 08:50:08 np0005533251.novalocal systemd[1]: Started Network Manager.
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0710] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0714] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0716] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0718] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0722] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0739] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0742] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0744] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0747] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0753] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0755] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0761] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0764] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0774] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0778] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0784] device (lo): Activation: successful, device activated.
Nov 24 08:50:08 np0005533251.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0831] dhcp4 (eth0): state changed new lease, address=38.129.56.124
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0837] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0887] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0901] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0903] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0905] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0907] device (eth0): Activation: successful, device activated.
Nov 24 08:50:08 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974208.0910] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 08:50:08 np0005533251.novalocal sudo[7173]: pam_unix(sudo:session): session closed for user root
Nov 24 08:50:08 np0005533251.novalocal python3[7260]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-0f51-775e-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:50:14 np0005533251.novalocal systemd[4301]: Starting Mark boot as successful...
Nov 24 08:50:14 np0005533251.novalocal systemd[4301]: Finished Mark boot as successful.
Nov 24 08:50:18 np0005533251.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 08:50:38 np0005533251.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.3694] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 08:50:53 np0005533251.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 08:50:53 np0005533251.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.3984] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.3987] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.3997] device (eth1): Activation: successful, device activated.
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4006] manager: startup complete
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4007] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <warn>  [1763974253.4015] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4023] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 24 08:50:53 np0005533251.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4111] dhcp4 (eth1): canceled DHCP transaction
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4112] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4113] dhcp4 (eth1): state changed no lease
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4128] policy: auto-activating connection 'ci-private-network' (e1906d45-4a19-53b0-9584-1b272dee14f0)
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4134] device (eth1): Activation: starting connection 'ci-private-network' (e1906d45-4a19-53b0-9584-1b272dee14f0)
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4135] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4139] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4146] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4155] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4190] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4192] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 08:50:53 np0005533251.novalocal NetworkManager[7187]: <info>  [1763974253.4199] device (eth1): Activation: successful, device activated.
Nov 24 08:51:03 np0005533251.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 08:51:08 np0005533251.novalocal sshd-session[4310]: Received disconnect from 38.102.83.114 port 36548:11: disconnected by user
Nov 24 08:51:08 np0005533251.novalocal sshd-session[4310]: Disconnected from user zuul 38.102.83.114 port 36548
Nov 24 08:51:08 np0005533251.novalocal sshd-session[4296]: pam_unix(sshd:session): session closed for user zuul
Nov 24 08:51:08 np0005533251.novalocal systemd-logind[822]: Session 1 logged out. Waiting for processes to exit.
Nov 24 08:52:04 np0005533251.novalocal chronyd[830]: Selected source 216.197.156.83 (2.centos.pool.ntp.org)
Nov 24 08:52:30 np0005533251.novalocal sshd-session[7289]: Accepted publickey for zuul from 38.102.83.114 port 58890 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 08:52:30 np0005533251.novalocal systemd-logind[822]: New session 3 of user zuul.
Nov 24 08:52:30 np0005533251.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 24 08:52:30 np0005533251.novalocal sshd-session[7289]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 08:52:30 np0005533251.novalocal sudo[7368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtonluancozmrdwdrcruixkwjggqqfvr ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 08:52:30 np0005533251.novalocal sudo[7368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:52:31 np0005533251.novalocal python3[7370]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:52:31 np0005533251.novalocal sudo[7368]: pam_unix(sudo:session): session closed for user root
Nov 24 08:52:31 np0005533251.novalocal sudo[7441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muoqpypycynxaqezvymhfjpeeaaxgtbq ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 24 08:52:31 np0005533251.novalocal sudo[7441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:52:31 np0005533251.novalocal python3[7443]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974350.8102195-373-158640532216257/source _original_basename=tmpz8tsjsmi follow=False checksum=a3ebf95cc3e4718aba4e7a218d4b9424c08a2ec8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:52:31 np0005533251.novalocal sudo[7441]: pam_unix(sudo:session): session closed for user root
Nov 24 08:52:35 np0005533251.novalocal sshd-session[7292]: Connection closed by 38.102.83.114 port 58890
Nov 24 08:52:35 np0005533251.novalocal sshd-session[7289]: pam_unix(sshd:session): session closed for user zuul
Nov 24 08:52:35 np0005533251.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 24 08:52:35 np0005533251.novalocal systemd-logind[822]: Session 3 logged out. Waiting for processes to exit.
Nov 24 08:52:35 np0005533251.novalocal systemd-logind[822]: Removed session 3.
Nov 24 08:53:14 np0005533251.novalocal systemd[4301]: Created slice User Background Tasks Slice.
Nov 24 08:53:14 np0005533251.novalocal systemd[4301]: Starting Cleanup of User's Temporary Files and Directories...
Nov 24 08:53:14 np0005533251.novalocal systemd[4301]: Finished Cleanup of User's Temporary Files and Directories.
Nov 24 08:55:49 np0005533251.novalocal sshd-session[7471]: Connection closed by 180.180.9.248 port 60534
Nov 24 08:55:50 np0005533251.novalocal sshd-session[7472]: Invalid user a from 180.180.9.248 port 34330
Nov 24 08:55:50 np0005533251.novalocal sshd-session[7472]: Connection closed by invalid user a 180.180.9.248 port 34330 [preauth]
Nov 24 08:57:49 np0005533251.novalocal sshd-session[7475]: Accepted publickey for zuul from 38.102.83.114 port 52154 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 08:57:49 np0005533251.novalocal systemd-logind[822]: New session 4 of user zuul.
Nov 24 08:57:49 np0005533251.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 24 08:57:49 np0005533251.novalocal sshd-session[7475]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 08:57:50 np0005533251.novalocal sudo[7502]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prtejoqgnwgdexmluvmngbkhsmcglvuv ; /usr/bin/python3'
Nov 24 08:57:50 np0005533251.novalocal sudo[7502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:50 np0005533251.novalocal python3[7504]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-be4d-b146-000000001cd2-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:57:50 np0005533251.novalocal sudo[7502]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:50 np0005533251.novalocal sudo[7530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjzribhnvebzyuygjvjikineejemchxr ; /usr/bin/python3'
Nov 24 08:57:50 np0005533251.novalocal sudo[7530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:50 np0005533251.novalocal python3[7532]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:57:50 np0005533251.novalocal sudo[7530]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:50 np0005533251.novalocal sudo[7557]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmpmtypvlhbideudsnsoysexdkgopkom ; /usr/bin/python3'
Nov 24 08:57:50 np0005533251.novalocal sudo[7557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:50 np0005533251.novalocal python3[7559]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:57:50 np0005533251.novalocal sudo[7557]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:50 np0005533251.novalocal sudo[7583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfqbkxtnndngpbouswhjiarqqsvxqpun ; /usr/bin/python3'
Nov 24 08:57:50 np0005533251.novalocal sudo[7583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:51 np0005533251.novalocal python3[7585]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:57:51 np0005533251.novalocal sudo[7583]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:51 np0005533251.novalocal sudo[7609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grktvtqlkdgeidmzblumzdkjeqydcurl ; /usr/bin/python3'
Nov 24 08:57:51 np0005533251.novalocal sudo[7609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:51 np0005533251.novalocal python3[7611]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:57:51 np0005533251.novalocal sudo[7609]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:52 np0005533251.novalocal sudo[7635]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geqjubgyyrhdsshlrmpaucehgpwlcxuj ; /usr/bin/python3'
Nov 24 08:57:52 np0005533251.novalocal sudo[7635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:52 np0005533251.novalocal python3[7637]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:57:52 np0005533251.novalocal sudo[7635]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:52 np0005533251.novalocal sudo[7713]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgzmrouzczoidyzqmofgkmicpnpsrfis ; /usr/bin/python3'
Nov 24 08:57:52 np0005533251.novalocal sudo[7713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:52 np0005533251.novalocal python3[7715]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 08:57:52 np0005533251.novalocal sudo[7713]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:53 np0005533251.novalocal sudo[7786]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkecabhwkqzmvqsdmwhvwvndiqyyjcpo ; /usr/bin/python3'
Nov 24 08:57:53 np0005533251.novalocal sudo[7786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:53 np0005533251.novalocal python3[7788]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974672.676189-507-128448845728643/source _original_basename=tmpavvbohmw follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 08:57:53 np0005533251.novalocal sudo[7786]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:54 np0005533251.novalocal sudo[7836]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wowmjvaxsabcevojljlwdxltlpwdkwjk ; /usr/bin/python3'
Nov 24 08:57:54 np0005533251.novalocal sudo[7836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:54 np0005533251.novalocal python3[7838]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 08:57:54 np0005533251.novalocal systemd[1]: Reloading.
Nov 24 08:57:54 np0005533251.novalocal systemd-rc-local-generator[7862]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 08:57:54 np0005533251.novalocal sudo[7836]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:55 np0005533251.novalocal sudo[7893]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztixmagqofvsrhqwccrtqdjainzcrcyb ; /usr/bin/python3'
Nov 24 08:57:55 np0005533251.novalocal sudo[7893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:56 np0005533251.novalocal python3[7895]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 24 08:57:56 np0005533251.novalocal sudo[7893]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:56 np0005533251.novalocal sudo[7919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pprijopribysnrynchfxnsuivhlfzrxq ; /usr/bin/python3'
Nov 24 08:57:56 np0005533251.novalocal sudo[7919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:56 np0005533251.novalocal python3[7921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:57:56 np0005533251.novalocal sudo[7919]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:56 np0005533251.novalocal sudo[7947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvscgeelqmzwmftrclyzipydwyefpuyo ; /usr/bin/python3'
Nov 24 08:57:56 np0005533251.novalocal sudo[7947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:56 np0005533251.novalocal python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:57:56 np0005533251.novalocal sudo[7947]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:57 np0005533251.novalocal sudo[7975]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmajkkpimghsbxznwbyxpljgpxzmtzdt ; /usr/bin/python3'
Nov 24 08:57:57 np0005533251.novalocal sudo[7975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:57 np0005533251.novalocal python3[7977]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:57:57 np0005533251.novalocal sudo[7975]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:57 np0005533251.novalocal sudo[8003]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujadzljyrfxjqmadiposdstpilzrflii ; /usr/bin/python3'
Nov 24 08:57:57 np0005533251.novalocal sudo[8003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:57:57 np0005533251.novalocal python3[8005]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:57:57 np0005533251.novalocal sudo[8003]: pam_unix(sudo:session): session closed for user root
Nov 24 08:57:58 np0005533251.novalocal python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-be4d-b146-000000001cd9-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 08:57:58 np0005533251.novalocal python3[8062]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 08:58:01 np0005533251.novalocal sshd-session[7478]: Connection closed by 38.102.83.114 port 52154
Nov 24 08:58:01 np0005533251.novalocal sshd-session[7475]: pam_unix(sshd:session): session closed for user zuul
Nov 24 08:58:01 np0005533251.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 24 08:58:01 np0005533251.novalocal systemd[1]: session-4.scope: Consumed 3.856s CPU time.
Nov 24 08:58:01 np0005533251.novalocal systemd-logind[822]: Session 4 logged out. Waiting for processes to exit.
Nov 24 08:58:01 np0005533251.novalocal systemd-logind[822]: Removed session 4.
Nov 24 08:58:03 np0005533251.novalocal sshd-session[8066]: Accepted publickey for zuul from 38.102.83.114 port 55322 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 08:58:03 np0005533251.novalocal systemd-logind[822]: New session 5 of user zuul.
Nov 24 08:58:03 np0005533251.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 24 08:58:03 np0005533251.novalocal sshd-session[8066]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 08:58:03 np0005533251.novalocal sudo[8093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdbbdqaazdroytneegaclstdllotlerc ; /usr/bin/python3'
Nov 24 08:58:03 np0005533251.novalocal sudo[8093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 08:58:03 np0005533251.novalocal python3[8095]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 08:58:09 np0005533251.novalocal irqbalance[817]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 24 08:58:09 np0005533251.novalocal irqbalance[817]: IRQ 26 affinity is now unmanaged
Nov 24 08:58:49 np0005533251.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 08:58:49 np0005533251.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 08:58:49 np0005533251.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 08:58:49 np0005533251.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 08:58:49 np0005533251.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 08:58:49 np0005533251.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 08:58:49 np0005533251.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 08:58:49 np0005533251.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 08:58:57 np0005533251.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 08:58:57 np0005533251.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 08:58:57 np0005533251.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 08:58:57 np0005533251.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 08:58:57 np0005533251.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 08:58:57 np0005533251.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 08:58:57 np0005533251.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 08:58:57 np0005533251.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 08:59:06 np0005533251.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 24 08:59:06 np0005533251.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 08:59:06 np0005533251.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 08:59:06 np0005533251.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 08:59:06 np0005533251.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 08:59:06 np0005533251.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 08:59:06 np0005533251.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 08:59:06 np0005533251.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 08:59:07 np0005533251.novalocal setsebool[8163]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 24 08:59:07 np0005533251.novalocal setsebool[8163]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 24 08:59:18 np0005533251.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 24 08:59:18 np0005533251.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 08:59:18 np0005533251.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 24 08:59:18 np0005533251.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 08:59:18 np0005533251.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 24 08:59:18 np0005533251.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 08:59:18 np0005533251.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 08:59:18 np0005533251.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 08:59:37 np0005533251.novalocal dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 08:59:37 np0005533251.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 08:59:37 np0005533251.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 24 08:59:37 np0005533251.novalocal systemd[1]: Reloading.
Nov 24 08:59:37 np0005533251.novalocal systemd-rc-local-generator[8916]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 08:59:37 np0005533251.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 08:59:38 np0005533251.novalocal sudo[8093]: pam_unix(sudo:session): session closed for user root
Nov 24 08:59:39 np0005533251.novalocal irqbalance[817]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 24 08:59:39 np0005533251.novalocal irqbalance[817]: IRQ 27 affinity is now unmanaged
Nov 24 08:59:59 np0005533251.novalocal python3[21564]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-41c3-2628-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:00:00 np0005533251.novalocal kernel: evm: overlay not supported
Nov 24 09:00:00 np0005533251.novalocal systemd[4301]: Starting D-Bus User Message Bus...
Nov 24 09:00:00 np0005533251.novalocal dbus-broker-launch[22083]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 24 09:00:00 np0005533251.novalocal dbus-broker-launch[22083]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 24 09:00:00 np0005533251.novalocal systemd[4301]: Started D-Bus User Message Bus.
Nov 24 09:00:00 np0005533251.novalocal dbus-broker-lau[22083]: Ready
Nov 24 09:00:00 np0005533251.novalocal systemd[4301]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 09:00:00 np0005533251.novalocal systemd[4301]: Created slice Slice /user.
Nov 24 09:00:00 np0005533251.novalocal systemd[4301]: podman-22020.scope: unit configures an IP firewall, but not running as root.
Nov 24 09:00:00 np0005533251.novalocal systemd[4301]: (This warning is only shown for the first unit using IP firewalling.)
Nov 24 09:00:00 np0005533251.novalocal systemd[4301]: Started podman-22020.scope.
Nov 24 09:00:00 np0005533251.novalocal systemd[4301]: Started podman-pause-5fb9a7c5.scope.
Nov 24 09:00:01 np0005533251.novalocal sudo[22740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtzfqpeyjlvqccpzztzjhjivydvqryit ; /usr/bin/python3'
Nov 24 09:00:01 np0005533251.novalocal sudo[22740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:01 np0005533251.novalocal python3[22752]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.129.56.16:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.129.56.16:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:00:01 np0005533251.novalocal python3[22752]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 24 09:00:01 np0005533251.novalocal sudo[22740]: pam_unix(sudo:session): session closed for user root
Nov 24 09:00:02 np0005533251.novalocal sshd-session[8069]: Connection closed by 38.102.83.114 port 55322
Nov 24 09:00:02 np0005533251.novalocal sshd-session[8066]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:00:02 np0005533251.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 24 09:00:02 np0005533251.novalocal systemd[1]: session-5.scope: Consumed 59.385s CPU time.
Nov 24 09:00:02 np0005533251.novalocal systemd-logind[822]: Session 5 logged out. Waiting for processes to exit.
Nov 24 09:00:02 np0005533251.novalocal systemd-logind[822]: Removed session 5.
Nov 24 09:00:17 np0005533251.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:00:17 np0005533251.novalocal systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:00:17 np0005533251.novalocal systemd[1]: man-db-cache-update.service: Consumed 46.712s CPU time.
Nov 24 09:00:17 np0005533251.novalocal systemd[1]: run-r843a9b409b1b44e1a89ff1dfa39c7330.service: Deactivated successfully.
Nov 24 09:00:22 np0005533251.novalocal sshd-session[29572]: Unable to negotiate with 38.129.56.127 port 33136: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 24 09:00:22 np0005533251.novalocal sshd-session[29574]: Connection closed by 38.129.56.127 port 33116 [preauth]
Nov 24 09:00:22 np0005533251.novalocal sshd-session[29573]: Connection closed by 38.129.56.127 port 33124 [preauth]
Nov 24 09:00:22 np0005533251.novalocal sshd-session[29576]: Unable to negotiate with 38.129.56.127 port 33146: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 24 09:00:22 np0005533251.novalocal sshd-session[29575]: Unable to negotiate with 38.129.56.127 port 33156: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 24 09:00:27 np0005533251.novalocal sshd-session[29582]: Accepted publickey for zuul from 38.102.83.114 port 33498 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 09:00:27 np0005533251.novalocal systemd-logind[822]: New session 6 of user zuul.
Nov 24 09:00:27 np0005533251.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 24 09:00:27 np0005533251.novalocal sshd-session[29582]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:00:27 np0005533251.novalocal python3[29609]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM8ruDbV0dT4f7otSS9ZkwTivv+VvdZBI90ZFtvHB0fKKCNPoKXMGfWx38kL9Jgkrr0hEGTFtsoY+YwwXpMooGE= zuul@np0005533250.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 09:00:27 np0005533251.novalocal sudo[29633]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjnflasocdzwpiozoylufzovjwvsiaeb ; /usr/bin/python3'
Nov 24 09:00:27 np0005533251.novalocal sudo[29633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:27 np0005533251.novalocal python3[29635]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM8ruDbV0dT4f7otSS9ZkwTivv+VvdZBI90ZFtvHB0fKKCNPoKXMGfWx38kL9Jgkrr0hEGTFtsoY+YwwXpMooGE= zuul@np0005533250.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 09:00:27 np0005533251.novalocal sudo[29633]: pam_unix(sudo:session): session closed for user root
Nov 24 09:00:28 np0005533251.novalocal sudo[29659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpsuvddkzvfebqfmlgvvfffclnyzpstn ; /usr/bin/python3'
Nov 24 09:00:28 np0005533251.novalocal sudo[29659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:28 np0005533251.novalocal python3[29661]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005533251.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 24 09:00:28 np0005533251.novalocal useradd[29663]: new group: name=cloud-admin, GID=1002
Nov 24 09:00:28 np0005533251.novalocal useradd[29663]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 24 09:00:28 np0005533251.novalocal sudo[29659]: pam_unix(sudo:session): session closed for user root
Nov 24 09:00:29 np0005533251.novalocal sudo[29693]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfwygouzlqrfiwuoujhxqxufppeieekd ; /usr/bin/python3'
Nov 24 09:00:29 np0005533251.novalocal sudo[29693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:29 np0005533251.novalocal python3[29695]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM8ruDbV0dT4f7otSS9ZkwTivv+VvdZBI90ZFtvHB0fKKCNPoKXMGfWx38kL9Jgkrr0hEGTFtsoY+YwwXpMooGE= zuul@np0005533250.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 09:00:29 np0005533251.novalocal sudo[29693]: pam_unix(sudo:session): session closed for user root
Nov 24 09:00:29 np0005533251.novalocal sudo[29771]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urtbaozamjynfycrehsmrqirtkbdibxy ; /usr/bin/python3'
Nov 24 09:00:29 np0005533251.novalocal sudo[29771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:29 np0005533251.novalocal python3[29773]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:00:29 np0005533251.novalocal sudo[29771]: pam_unix(sudo:session): session closed for user root
Nov 24 09:00:30 np0005533251.novalocal sudo[29844]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waxmjsyglrwvawlusqnnjazedcupccwe ; /usr/bin/python3'
Nov 24 09:00:30 np0005533251.novalocal sudo[29844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:30 np0005533251.novalocal python3[29846]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763974829.3769236-167-215573492942811/source _original_basename=tmpisxyy66y follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:00:30 np0005533251.novalocal sudo[29844]: pam_unix(sudo:session): session closed for user root
Nov 24 09:00:30 np0005533251.novalocal sudo[29894]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szuapmylhwvzuendkjxhuyscxggreylz ; /usr/bin/python3'
Nov 24 09:00:30 np0005533251.novalocal sudo[29894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:00:31 np0005533251.novalocal python3[29896]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 24 09:00:31 np0005533251.novalocal systemd[1]: Starting Hostname Service...
Nov 24 09:00:31 np0005533251.novalocal systemd[1]: Started Hostname Service.
Nov 24 09:00:31 np0005533251.novalocal systemd-hostnamed[29900]: Changed pretty hostname to 'compute-0'
Nov 24 09:00:31 compute-0 systemd-hostnamed[29900]: Hostname set to <compute-0> (static)
Nov 24 09:00:31 compute-0 NetworkManager[7187]: <info>  [1763974831.1859] hostname: static hostname changed from "np0005533251.novalocal" to "compute-0"
Nov 24 09:00:31 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 09:00:31 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 09:00:31 compute-0 sudo[29894]: pam_unix(sudo:session): session closed for user root
Nov 24 09:00:31 compute-0 sshd-session[29585]: Connection closed by 38.102.83.114 port 33498
Nov 24 09:00:31 compute-0 sshd-session[29582]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:00:31 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 24 09:00:31 compute-0 systemd[1]: session-6.scope: Consumed 2.204s CPU time.
Nov 24 09:00:31 compute-0 systemd-logind[822]: Session 6 logged out. Waiting for processes to exit.
Nov 24 09:00:31 compute-0 systemd-logind[822]: Removed session 6.
Nov 24 09:00:41 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 09:01:01 compute-0 CROND[29914]: (root) CMD (run-parts /etc/cron.hourly)
Nov 24 09:01:01 compute-0 run-parts[29917]: (/etc/cron.hourly) starting 0anacron
Nov 24 09:01:01 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 09:01:01 compute-0 anacron[29927]: Anacron started on 2025-11-24
Nov 24 09:01:01 compute-0 anacron[29927]: Will run job `cron.daily' in 32 min.
Nov 24 09:01:01 compute-0 anacron[29927]: Will run job `cron.weekly' in 52 min.
Nov 24 09:01:01 compute-0 anacron[29927]: Will run job `cron.monthly' in 72 min.
Nov 24 09:01:01 compute-0 anacron[29927]: Jobs will be executed sequentially
Nov 24 09:01:01 compute-0 run-parts[29929]: (/etc/cron.hourly) finished 0anacron
Nov 24 09:01:01 compute-0 CROND[29913]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 24 09:03:14 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 24 09:03:14 compute-0 systemd[1]: Starting dnf makecache...
Nov 24 09:03:14 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 24 09:03:14 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 24 09:03:14 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 24 09:03:14 compute-0 dnf[29933]: Failed determining last makecache time.
Nov 24 09:03:15 compute-0 dnf[29933]: CentOS Stream 9 - BaseOS                         70 kB/s | 7.3 kB     00:00
Nov 24 09:03:15 compute-0 dnf[29933]: CentOS Stream 9 - AppStream                      77 kB/s | 7.4 kB     00:00
Nov 24 09:03:15 compute-0 dnf[29933]: CentOS Stream 9 - CRB                            46 kB/s | 7.2 kB     00:00
Nov 24 09:03:15 compute-0 dnf[29933]: CentOS Stream 9 - Extras packages                47 kB/s | 8.3 kB     00:00
Nov 24 09:03:15 compute-0 dnf[29933]: Metadata cache created.
Nov 24 09:03:15 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 24 09:03:15 compute-0 systemd[1]: Finished dnf makecache.
Nov 24 09:04:28 compute-0 sshd-session[29940]: Accepted publickey for zuul from 38.129.56.127 port 56644 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 09:04:28 compute-0 systemd-logind[822]: New session 7 of user zuul.
Nov 24 09:04:28 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 24 09:04:28 compute-0 sshd-session[29940]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:04:28 compute-0 python3[30016]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:04:30 compute-0 sudo[30130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfczfdnauvxgqhiqxqjozpqseyuzbewl ; /usr/bin/python3'
Nov 24 09:04:30 compute-0 sudo[30130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:30 compute-0 python3[30132]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:30 compute-0 sudo[30130]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:31 compute-0 sudo[30203]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eikohmnzmsrtjgcflrbzbgibqspeygci ; /usr/bin/python3'
Nov 24 09:04:31 compute-0 sudo[30203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:31 compute-0 python3[30205]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.4864428-33949-120841824148188/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:31 compute-0 sudo[30203]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:31 compute-0 sudo[30229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-subsuyxqdcchoqddaigfvmhffpqdcslu ; /usr/bin/python3'
Nov 24 09:04:31 compute-0 sudo[30229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:31 compute-0 python3[30231]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:31 compute-0 sudo[30229]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:31 compute-0 sudo[30302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yleefxwxukosrxuhlkrvurepilyzqfjh ; /usr/bin/python3'
Nov 24 09:04:31 compute-0 sudo[30302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:31 compute-0 python3[30304]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.4864428-33949-120841824148188/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:31 compute-0 sudo[30302]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:31 compute-0 sudo[30328]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seasrmivyyjwakdemjqebixvfepsralf ; /usr/bin/python3'
Nov 24 09:04:31 compute-0 sudo[30328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:32 compute-0 python3[30330]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:32 compute-0 sudo[30328]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:32 compute-0 sudo[30401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riiwdduckikusmdvdmtzopohohqbkids ; /usr/bin/python3'
Nov 24 09:04:32 compute-0 sudo[30401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:32 compute-0 python3[30403]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.4864428-33949-120841824148188/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:32 compute-0 sudo[30401]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:32 compute-0 sudo[30427]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxveovzyugfzyqhtoyvxidwdgghrmahp ; /usr/bin/python3'
Nov 24 09:04:32 compute-0 sudo[30427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:32 compute-0 python3[30429]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:32 compute-0 sudo[30427]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:32 compute-0 sudo[30500]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-essktfwrftqmeteonpqbcyojynetvxny ; /usr/bin/python3'
Nov 24 09:04:32 compute-0 sudo[30500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:32 compute-0 python3[30502]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.4864428-33949-120841824148188/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:32 compute-0 sudo[30500]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:33 compute-0 sudo[30526]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgjmuurtujgsekgeqklqfcqdkmpmfque ; /usr/bin/python3'
Nov 24 09:04:33 compute-0 sudo[30526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:33 compute-0 python3[30528]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:33 compute-0 sudo[30526]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:33 compute-0 sudo[30599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhzpjjrtayvtysmozkwpfxoihajjvbah ; /usr/bin/python3'
Nov 24 09:04:33 compute-0 sudo[30599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:33 compute-0 python3[30601]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.4864428-33949-120841824148188/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:33 compute-0 sudo[30599]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:33 compute-0 sudo[30625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdepfjsccosypnnwpttkwpomjwqkcdxi ; /usr/bin/python3'
Nov 24 09:04:33 compute-0 sudo[30625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:33 compute-0 python3[30627]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:33 compute-0 sudo[30625]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:34 compute-0 sudo[30698]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldnoosowgwhgmzngzfkwgcwchdfgzpdq ; /usr/bin/python3'
Nov 24 09:04:34 compute-0 sudo[30698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:34 compute-0 python3[30700]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.4864428-33949-120841824148188/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:34 compute-0 sudo[30698]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:34 compute-0 sudo[30724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkvwwsjplzuewzgcabagbmytypcetttk ; /usr/bin/python3'
Nov 24 09:04:34 compute-0 sudo[30724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:34 compute-0 python3[30726]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:04:34 compute-0 sudo[30724]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:34 compute-0 sudo[30797]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aodkdxcxnifkmfyitrbbarjvwrhtuiug ; /usr/bin/python3'
Nov 24 09:04:34 compute-0 sudo[30797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:04:34 compute-0 python3[30799]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763975070.4864428-33949-120841824148188/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:04:34 compute-0 sudo[30797]: pam_unix(sudo:session): session closed for user root
Nov 24 09:04:37 compute-0 sshd-session[30824]: Connection closed by 192.168.122.11 port 33498 [preauth]
Nov 24 09:04:37 compute-0 sshd-session[30825]: Connection closed by 192.168.122.11 port 33510 [preauth]
Nov 24 09:04:37 compute-0 sshd-session[30827]: Unable to negotiate with 192.168.122.11 port 33528: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 24 09:04:37 compute-0 sshd-session[30828]: Unable to negotiate with 192.168.122.11 port 33530: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 24 09:04:37 compute-0 sshd-session[30826]: Unable to negotiate with 192.168.122.11 port 33524: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 24 09:04:47 compute-0 python3[30857]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:06:14 compute-0 sshd-session[30859]: banner exchange: Connection from 148.113.211.131 port 55228: invalid format
Nov 24 09:07:55 compute-0 sshd-session[30861]: Connection closed by 159.65.46.209 port 58184
Nov 24 09:09:46 compute-0 sshd-session[29943]: Received disconnect from 38.129.56.127 port 56644:11: disconnected by user
Nov 24 09:09:46 compute-0 sshd-session[29943]: Disconnected from user zuul 38.129.56.127 port 56644
Nov 24 09:09:46 compute-0 sshd-session[29940]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:09:46 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 24 09:09:46 compute-0 systemd[1]: session-7.scope: Consumed 4.751s CPU time.
Nov 24 09:09:46 compute-0 systemd-logind[822]: Session 7 logged out. Waiting for processes to exit.
Nov 24 09:09:46 compute-0 systemd-logind[822]: Removed session 7.
Nov 24 09:16:27 compute-0 sshd-session[30865]: Accepted publickey for zuul from 192.168.122.30 port 32908 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:16:27 compute-0 systemd-logind[822]: New session 8 of user zuul.
Nov 24 09:16:27 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 24 09:16:27 compute-0 sshd-session[30865]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:16:28 compute-0 python3.9[31018]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:16:29 compute-0 sudo[31197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ethpgrkivydnlizgildtblyizmymtsfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975789.439449-56-157273808890508/AnsiballZ_command.py'
Nov 24 09:16:29 compute-0 sudo[31197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:16:30 compute-0 python3.9[31199]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:16:37 compute-0 sudo[31197]: pam_unix(sudo:session): session closed for user root
Nov 24 09:16:37 compute-0 sshd-session[30868]: Connection closed by 192.168.122.30 port 32908
Nov 24 09:16:37 compute-0 sshd-session[30865]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:16:37 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 24 09:16:37 compute-0 systemd[1]: session-8.scope: Consumed 7.476s CPU time.
Nov 24 09:16:37 compute-0 systemd-logind[822]: Session 8 logged out. Waiting for processes to exit.
Nov 24 09:16:37 compute-0 systemd-logind[822]: Removed session 8.
Nov 24 09:16:52 compute-0 sshd-session[31257]: Accepted publickey for zuul from 192.168.122.30 port 53938 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:16:52 compute-0 systemd-logind[822]: New session 9 of user zuul.
Nov 24 09:16:52 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 24 09:16:52 compute-0 sshd-session[31257]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:16:53 compute-0 python3.9[31410]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 09:16:54 compute-0 python3.9[31584]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:16:56 compute-0 sudo[31734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqglzdlnnmoeajtcoscslahfdfwrmiqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975815.2471576-93-191163598820781/AnsiballZ_command.py'
Nov 24 09:16:56 compute-0 sudo[31734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:16:56 compute-0 python3.9[31736]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:16:56 compute-0 sudo[31734]: pam_unix(sudo:session): session closed for user root
Nov 24 09:16:57 compute-0 sudo[31887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyirtxjsqtxdsguaaotsichizzbrzfzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975816.7706096-129-154466221096334/AnsiballZ_stat.py'
Nov 24 09:16:57 compute-0 sudo[31887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:16:57 compute-0 python3.9[31889]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:16:57 compute-0 sudo[31887]: pam_unix(sudo:session): session closed for user root
Nov 24 09:16:58 compute-0 sudo[32039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjsfnjzuusoxqqydwlzxhrnqarlbicjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975817.6536703-153-60530455065330/AnsiballZ_file.py'
Nov 24 09:16:58 compute-0 sudo[32039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:16:58 compute-0 python3.9[32041]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:16:58 compute-0 sudo[32039]: pam_unix(sudo:session): session closed for user root
Nov 24 09:16:58 compute-0 sudo[32191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaxgnayojvkaqqeevtucodsknbualfhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975818.4971974-177-95827136539784/AnsiballZ_stat.py'
Nov 24 09:16:58 compute-0 sudo[32191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:16:58 compute-0 python3.9[32193]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:16:58 compute-0 sudo[32191]: pam_unix(sudo:session): session closed for user root
Nov 24 09:16:59 compute-0 sudo[32314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtzrcjdgfhwgqazrgvbhmgojrpidmjuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975818.4971974-177-95827136539784/AnsiballZ_copy.py'
Nov 24 09:16:59 compute-0 sudo[32314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:16:59 compute-0 python3.9[32316]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763975818.4971974-177-95827136539784/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:16:59 compute-0 sudo[32314]: pam_unix(sudo:session): session closed for user root
Nov 24 09:17:00 compute-0 sudo[32466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzlpqucilpraxhmjyxevhfqnuzxtaoeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975819.8539498-222-230932263477498/AnsiballZ_setup.py'
Nov 24 09:17:00 compute-0 sudo[32466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:17:00 compute-0 python3.9[32468]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:17:00 compute-0 sudo[32466]: pam_unix(sudo:session): session closed for user root
Nov 24 09:17:01 compute-0 sudo[32622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szriqvbibfqvjahhiigzgxagmbszucsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975820.8162246-246-122981332574724/AnsiballZ_file.py'
Nov 24 09:17:01 compute-0 sudo[32622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:17:01 compute-0 python3.9[32624]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:17:01 compute-0 sudo[32622]: pam_unix(sudo:session): session closed for user root
Nov 24 09:17:01 compute-0 sudo[32774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntpwvqbijtoeqoxnwgrckhywottwellk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975821.557817-273-53901389038001/AnsiballZ_file.py'
Nov 24 09:17:01 compute-0 sudo[32774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:17:02 compute-0 python3.9[32776]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:17:02 compute-0 sudo[32774]: pam_unix(sudo:session): session closed for user root
Nov 24 09:17:02 compute-0 python3.9[32926]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:17:06 compute-0 python3.9[33180]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:17:07 compute-0 python3.9[33330]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:17:08 compute-0 python3.9[33484]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:17:09 compute-0 sudo[33640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckncrkxkuccwpsyroyphbbknhbnwqwxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975829.2463531-417-274848946133051/AnsiballZ_setup.py'
Nov 24 09:17:09 compute-0 sudo[33640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:17:09 compute-0 python3.9[33642]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:17:09 compute-0 sudo[33640]: pam_unix(sudo:session): session closed for user root
Nov 24 09:17:10 compute-0 sudo[33724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzzuigsuvdybgbfrwmerltgehblgrckf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975829.2463531-417-274848946133051/AnsiballZ_dnf.py'
Nov 24 09:17:10 compute-0 sudo[33724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:17:10 compute-0 python3.9[33726]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:18:08 compute-0 systemd[1]: Reloading.
Nov 24 09:18:08 compute-0 systemd-rc-local-generator[33922]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:18:09 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 24 09:18:09 compute-0 systemd[1]: Reloading.
Nov 24 09:18:09 compute-0 systemd-rc-local-generator[33962]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:18:09 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 24 09:18:09 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 24 09:18:09 compute-0 systemd[1]: Reloading.
Nov 24 09:18:09 compute-0 systemd-rc-local-generator[34002]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:18:09 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 24 09:18:09 compute-0 dbus-broker-launch[790]: Noticed file-system modification, trigger reload.
Nov 24 09:18:09 compute-0 dbus-broker-launch[790]: Noticed file-system modification, trigger reload.
Nov 24 09:18:09 compute-0 dbus-broker-launch[790]: Noticed file-system modification, trigger reload.
Nov 24 09:19:09 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Nov 24 09:19:09 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 09:19:09 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 09:19:09 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 09:19:09 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 09:19:09 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 09:19:09 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 09:19:09 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 09:19:10 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 24 09:19:10 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:19:10 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:19:10 compute-0 systemd[1]: Reloading.
Nov 24 09:19:10 compute-0 systemd-rc-local-generator[34315]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:19:10 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:19:11 compute-0 sudo[33724]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:11 compute-0 sudo[35222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoriitokyncrlnsfezjlucjzygdseevd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975951.152138-453-214835453201687/AnsiballZ_command.py'
Nov 24 09:19:11 compute-0 sudo[35222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:19:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:19:11 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.115s CPU time.
Nov 24 09:19:11 compute-0 systemd[1]: run-r5afa5c5217784999ad979261742ced7b.service: Deactivated successfully.
Nov 24 09:19:11 compute-0 python3.9[35225]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:19:12 compute-0 sudo[35222]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:13 compute-0 sudo[35505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxizhrwvtgeqlqclullgnqhnxcphvrmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975953.0213542-477-110252772108262/AnsiballZ_selinux.py'
Nov 24 09:19:13 compute-0 sudo[35505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:13 compute-0 python3.9[35507]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 09:19:13 compute-0 sudo[35505]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:14 compute-0 sudo[35657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ognymahsvhaivdrnhunbghwbcqujtgsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975954.385418-510-206563877960291/AnsiballZ_command.py'
Nov 24 09:19:14 compute-0 sudo[35657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:14 compute-0 python3.9[35659]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 09:19:15 compute-0 sudo[35657]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:16 compute-0 sudo[35810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcaxnbfitceiqiauukmuikkaaqrzqpob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975956.170897-534-216655979510347/AnsiballZ_file.py'
Nov 24 09:19:16 compute-0 sudo[35810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:17 compute-0 python3.9[35812]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:19:17 compute-0 sudo[35810]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:17 compute-0 sudo[35963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylraywbokrcynfmmyltxfevxvnzatfio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975957.4797616-558-276050443030231/AnsiballZ_mount.py'
Nov 24 09:19:17 compute-0 sudo[35963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:18 compute-0 python3.9[35965]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 09:19:18 compute-0 sudo[35963]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:19 compute-0 sudo[36115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xozaorekdibprwgugardyoplmkqfuhbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975959.397663-642-167250836137508/AnsiballZ_file.py'
Nov 24 09:19:19 compute-0 sudo[36115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:19 compute-0 python3.9[36117]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:19:19 compute-0 sudo[36115]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:20 compute-0 sudo[36267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umwsldkzceefwezrzdykwwyqzjqwnbyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975960.123113-666-117815022759188/AnsiballZ_stat.py'
Nov 24 09:19:20 compute-0 sudo[36267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:20 compute-0 python3.9[36269]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:19:20 compute-0 sudo[36267]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:21 compute-0 sudo[36390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coquchfxjsnkdmcwldadhskfozwivhef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975960.123113-666-117815022759188/AnsiballZ_copy.py'
Nov 24 09:19:21 compute-0 sudo[36390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:21 compute-0 python3.9[36392]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763975960.123113-666-117815022759188/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:19:21 compute-0 sudo[36390]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:24 compute-0 sudo[36542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhwfdjdtsnvcgujpeplicwwifxwgzxrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975964.2600236-738-201278896818006/AnsiballZ_stat.py'
Nov 24 09:19:24 compute-0 sudo[36542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:24 compute-0 python3.9[36544]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:19:24 compute-0 sudo[36542]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:25 compute-0 sudo[36694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opdcfpfiqkssbpronbujyprmukhrkoca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975964.9132268-762-200287835250447/AnsiballZ_command.py'
Nov 24 09:19:25 compute-0 sudo[36694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:25 compute-0 python3.9[36696]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:19:25 compute-0 sudo[36694]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:25 compute-0 sudo[36847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tacochzhmasmmcneyizshuxqxemcifyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975965.6809263-786-14721637458987/AnsiballZ_file.py'
Nov 24 09:19:25 compute-0 sudo[36847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:26 compute-0 python3.9[36849]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:19:26 compute-0 sudo[36847]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:27 compute-0 sudo[36999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xceiokkmesmfbylpwywfuxysyvwbizxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975966.680004-819-159270739022235/AnsiballZ_getent.py'
Nov 24 09:19:27 compute-0 sudo[36999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:27 compute-0 python3.9[37001]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 09:19:27 compute-0 sudo[36999]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:27 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:19:27 compute-0 sudo[37153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pihjhdjiobwmhrhzpsokrjnaqemqgcdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975967.4885879-843-15456217588211/AnsiballZ_group.py'
Nov 24 09:19:27 compute-0 sudo[37153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:28 compute-0 python3.9[37155]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 09:19:28 compute-0 groupadd[37156]: group added to /etc/group: name=qemu, GID=107
Nov 24 09:19:28 compute-0 groupadd[37156]: group added to /etc/gshadow: name=qemu
Nov 24 09:19:28 compute-0 groupadd[37156]: new group: name=qemu, GID=107
Nov 24 09:19:28 compute-0 sudo[37153]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:29 compute-0 sudo[37311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwnfgsnwtefeqkgykuadqzgdvoshvsph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975968.4292984-867-268536743960615/AnsiballZ_user.py'
Nov 24 09:19:29 compute-0 sudo[37311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:29 compute-0 python3.9[37313]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 09:19:29 compute-0 useradd[37315]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 09:19:29 compute-0 sudo[37311]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:29 compute-0 sudo[37472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zicuvkvaekzdgkumjtphluvbnrermhcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975969.7242231-891-96509894019789/AnsiballZ_getent.py'
Nov 24 09:19:29 compute-0 sudo[37472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:30 compute-0 python3.9[37474]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 09:19:30 compute-0 sudo[37472]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:30 compute-0 sudo[37625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muylqyhzbxmedjwxxhtleeafdibzdbyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975970.4671454-915-236868149255613/AnsiballZ_group.py'
Nov 24 09:19:30 compute-0 sudo[37625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:30 compute-0 python3.9[37627]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 09:19:30 compute-0 groupadd[37628]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 24 09:19:30 compute-0 groupadd[37628]: group added to /etc/gshadow: name=hugetlbfs
Nov 24 09:19:30 compute-0 groupadd[37628]: new group: name=hugetlbfs, GID=42477
Nov 24 09:19:30 compute-0 sudo[37625]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:31 compute-0 sudo[37783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jonuvwalddoxvaeywkjhsqydxjqzxsuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975971.3189619-942-47395832391150/AnsiballZ_file.py'
Nov 24 09:19:31 compute-0 sudo[37783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:31 compute-0 python3.9[37785]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 09:19:31 compute-0 sudo[37783]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:32 compute-0 sudo[37935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrazqitxwbtomatinekfmoknmzypkguc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975972.3947568-975-132954695783007/AnsiballZ_dnf.py'
Nov 24 09:19:32 compute-0 sudo[37935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:32 compute-0 python3.9[37937]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:19:34 compute-0 sudo[37935]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:35 compute-0 sudo[38090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eogalpnlhulnpurutspdaanamdrsuzhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975974.7870257-999-80754731239187/AnsiballZ_file.py'
Nov 24 09:19:35 compute-0 sudo[38090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:35 compute-0 python3.9[38092]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:19:35 compute-0 sudo[38090]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:35 compute-0 sudo[38242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwlrkzgtrfwylvcmrrakrovjwdtegqci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975975.5129805-1023-225659477917339/AnsiballZ_stat.py'
Nov 24 09:19:35 compute-0 sudo[38242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:35 compute-0 python3.9[38244]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:19:35 compute-0 sudo[38242]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:36 compute-0 sudo[38365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klweiwqnveaivffgzpvowynwvnoxujzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975975.5129805-1023-225659477917339/AnsiballZ_copy.py'
Nov 24 09:19:36 compute-0 sudo[38365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:36 compute-0 python3.9[38367]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763975975.5129805-1023-225659477917339/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:19:36 compute-0 sudo[38365]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:37 compute-0 sudo[38517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjtzpwyujxlpueqaafhndlibwlucqrgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975976.899751-1068-164421260101729/AnsiballZ_systemd.py'
Nov 24 09:19:37 compute-0 sudo[38517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:37 compute-0 python3.9[38519]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:19:37 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 24 09:19:37 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 24 09:19:37 compute-0 kernel: Bridge firewalling registered
Nov 24 09:19:37 compute-0 systemd-modules-load[38523]: Inserted module 'br_netfilter'
Nov 24 09:19:37 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 24 09:19:37 compute-0 sudo[38517]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:38 compute-0 sudo[38676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dywxcuimdqmicjlyrhkznbcnxhccjiit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975978.0624115-1092-17279592249082/AnsiballZ_stat.py'
Nov 24 09:19:38 compute-0 sudo[38676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:38 compute-0 python3.9[38678]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:19:38 compute-0 sudo[38676]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:39 compute-0 sudo[38799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqlrvlmwqfszdvkapljiuewsgwbpfcse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975978.0624115-1092-17279592249082/AnsiballZ_copy.py'
Nov 24 09:19:39 compute-0 sudo[38799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:39 compute-0 python3.9[38801]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763975978.0624115-1092-17279592249082/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:19:39 compute-0 sudo[38799]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:39 compute-0 sudo[38951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xckhcivcbrbkjhtocjolvmmgokhyknto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975979.7288318-1146-190659236303082/AnsiballZ_dnf.py'
Nov 24 09:19:39 compute-0 sudo[38951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:40 compute-0 python3.9[38953]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:19:44 compute-0 dbus-broker-launch[790]: Noticed file-system modification, trigger reload.
Nov 24 09:19:44 compute-0 dbus-broker-launch[790]: Noticed file-system modification, trigger reload.
Nov 24 09:19:44 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:19:44 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:19:44 compute-0 systemd[1]: Reloading.
Nov 24 09:19:44 compute-0 systemd-rc-local-generator[39016]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:19:45 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:19:45 compute-0 sudo[38951]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:46 compute-0 python3.9[40371]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:19:47 compute-0 python3.9[41370]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 09:19:47 compute-0 python3.9[42120]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:19:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:19:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:19:48 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.672s CPU time.
Nov 24 09:19:48 compute-0 systemd[1]: run-rf9328d69990b40cbac6f9d311c1a196e.service: Deactivated successfully.
Nov 24 09:19:48 compute-0 sudo[43148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmabaqkfvfgbnmrijnioncasqynkoobn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975988.432568-1263-18553420791403/AnsiballZ_command.py'
Nov 24 09:19:48 compute-0 sudo[43148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:48 compute-0 python3.9[43150]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:19:48 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 09:19:49 compute-0 systemd[1]: Starting Authorization Manager...
Nov 24 09:19:49 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 09:19:49 compute-0 polkitd[43367]: Started polkitd version 0.117
Nov 24 09:19:49 compute-0 polkitd[43367]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 09:19:49 compute-0 polkitd[43367]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 09:19:49 compute-0 polkitd[43367]: Finished loading, compiling and executing 2 rules
Nov 24 09:19:49 compute-0 systemd[1]: Started Authorization Manager.
Nov 24 09:19:49 compute-0 polkitd[43367]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 24 09:19:49 compute-0 sudo[43148]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:50 compute-0 sudo[43535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkzdutcwijhcaymgvqzuvvxqkwsxbjht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975989.9799244-1290-43474427564853/AnsiballZ_systemd.py'
Nov 24 09:19:50 compute-0 sudo[43535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:50 compute-0 python3.9[43537]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:19:50 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 09:19:50 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 09:19:50 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 09:19:50 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 09:19:50 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 09:19:50 compute-0 sudo[43535]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:51 compute-0 python3.9[43698]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 09:19:54 compute-0 sudo[43848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utrjsvcbdvqnqlplpfhilskjqofrzixw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975994.5662801-1461-182651713001408/AnsiballZ_systemd.py'
Nov 24 09:19:54 compute-0 sudo[43848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:55 compute-0 python3.9[43850]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:19:55 compute-0 systemd[1]: Reloading.
Nov 24 09:19:55 compute-0 systemd-rc-local-generator[43882]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:19:55 compute-0 sudo[43848]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:55 compute-0 sudo[44037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-istgwkoqixjtwrrztybpoedtjudjhmmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975995.5153916-1461-233609225535535/AnsiballZ_systemd.py'
Nov 24 09:19:55 compute-0 sudo[44037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:56 compute-0 python3.9[44039]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:19:56 compute-0 systemd[1]: Reloading.
Nov 24 09:19:56 compute-0 systemd-rc-local-generator[44070]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:19:56 compute-0 sudo[44037]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:57 compute-0 sudo[44226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuepvkznoabndtkvdfwviehqlktmblur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975996.7601285-1509-27259659610692/AnsiballZ_command.py'
Nov 24 09:19:57 compute-0 sudo[44226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:57 compute-0 python3.9[44228]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:19:57 compute-0 sudo[44226]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:57 compute-0 sudo[44379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnyjcdyriqvosghoiwlirxjawwzgkdec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975997.4489124-1533-171900168910364/AnsiballZ_command.py'
Nov 24 09:19:57 compute-0 sudo[44379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:57 compute-0 python3.9[44381]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:19:57 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 24 09:19:57 compute-0 sudo[44379]: pam_unix(sudo:session): session closed for user root
Nov 24 09:19:58 compute-0 sudo[44532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahwwbididkpbcnntqjsubjcaypljyzhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763975998.1233296-1557-152537705005241/AnsiballZ_command.py'
Nov 24 09:19:58 compute-0 sudo[44532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:19:58 compute-0 python3.9[44534]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:19:59 compute-0 sudo[44532]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:00 compute-0 sudo[44694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksefypecdwbbkedvedbkobhmhjyixdyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976000.2938015-1581-5963309288154/AnsiballZ_command.py'
Nov 24 09:20:00 compute-0 sudo[44694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:00 compute-0 python3.9[44696]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:20:00 compute-0 sudo[44694]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:01 compute-0 sudo[44847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suilrwxbxocjqgubhaizwrmjqngrfppv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976000.9747388-1605-239711216970343/AnsiballZ_systemd.py'
Nov 24 09:20:01 compute-0 sudo[44847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:01 compute-0 python3.9[44849]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:20:01 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 09:20:01 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 24 09:20:01 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 24 09:20:01 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 24 09:20:01 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 09:20:01 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 24 09:20:01 compute-0 sudo[44847]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:02 compute-0 sshd-session[31260]: Connection closed by 192.168.122.30 port 53938
Nov 24 09:20:02 compute-0 sshd-session[31257]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:20:02 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 24 09:20:02 compute-0 systemd[1]: session-9.scope: Consumed 2min 7.426s CPU time.
Nov 24 09:20:02 compute-0 systemd-logind[822]: Session 9 logged out. Waiting for processes to exit.
Nov 24 09:20:02 compute-0 systemd-logind[822]: Removed session 9.
Nov 24 09:20:08 compute-0 sshd-session[44880]: Accepted publickey for zuul from 192.168.122.30 port 35074 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:20:08 compute-0 systemd-logind[822]: New session 10 of user zuul.
Nov 24 09:20:08 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 24 09:20:08 compute-0 sshd-session[44880]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:20:09 compute-0 python3.9[45033]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:20:10 compute-0 sudo[45187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fldhnuljspqzjsptllbtwmkhpymzwgxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976009.8095574-68-90114522270277/AnsiballZ_getent.py'
Nov 24 09:20:10 compute-0 sudo[45187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:10 compute-0 python3.9[45189]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 09:20:10 compute-0 sudo[45187]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:11 compute-0 sudo[45340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imosrfttytaynlunetyeforzaexeswvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976010.7382758-92-156871858824679/AnsiballZ_group.py'
Nov 24 09:20:11 compute-0 sudo[45340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:11 compute-0 python3.9[45342]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 09:20:11 compute-0 groupadd[45343]: group added to /etc/group: name=openvswitch, GID=42476
Nov 24 09:20:11 compute-0 groupadd[45343]: group added to /etc/gshadow: name=openvswitch
Nov 24 09:20:11 compute-0 groupadd[45343]: new group: name=openvswitch, GID=42476
Nov 24 09:20:11 compute-0 sudo[45340]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:12 compute-0 sudo[45498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgysteswaikjrkjtriqcumdbznwlebfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976011.5873165-116-125187286852809/AnsiballZ_user.py'
Nov 24 09:20:12 compute-0 sudo[45498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:12 compute-0 python3.9[45500]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 09:20:12 compute-0 useradd[45502]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 09:20:12 compute-0 useradd[45502]: add 'openvswitch' to group 'hugetlbfs'
Nov 24 09:20:12 compute-0 useradd[45502]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 24 09:20:12 compute-0 sudo[45498]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:13 compute-0 sudo[45658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwufjxgqvgboghmycffvonhzwaxibwhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976012.796188-146-252346692790311/AnsiballZ_setup.py'
Nov 24 09:20:13 compute-0 sudo[45658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:13 compute-0 python3.9[45660]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:20:13 compute-0 sudo[45658]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:13 compute-0 sudo[45742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhpibkbalowxmipjzshexrlvvjavaroe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976012.796188-146-252346692790311/AnsiballZ_dnf.py'
Nov 24 09:20:13 compute-0 sudo[45742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:14 compute-0 python3.9[45744]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 09:20:16 compute-0 sudo[45742]: pam_unix(sudo:session): session closed for user root
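
Note the two-phase dnf pattern that starts here: this first invocation runs with download_only=True, so the package payload is fetched and cached before anything is installed, and the install proper follows as a separate task. Presumably this ordering lets a repository or network failure surface before the transaction touches the system. A sketch of the download step (task name assumed, arguments from the log):

    - name: Pre-download the openvswitch package   # task name assumed
      become: true
      ansible.legacy.dnf:
        name:
          - openvswitch
        download_only: true
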
Nov 24 09:20:16 compute-0 sudo[45906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icdnunnhuqdvadoczyggjfwbtnzshhzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976016.7619476-188-8311030619285/AnsiballZ_dnf.py'
Nov 24 09:20:16 compute-0 sudo[45906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:17 compute-0 python3.9[45908]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:20:28 compute-0 kernel: SELinux:  Converting 2730 SID table entries...
Nov 24 09:20:28 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 09:20:28 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 09:20:28 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 09:20:28 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 09:20:28 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 09:20:28 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 09:20:28 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 09:20:28 compute-0 groupadd[45932]: group added to /etc/group: name=unbound, GID=993
Nov 24 09:20:28 compute-0 groupadd[45932]: group added to /etc/gshadow: name=unbound
Nov 24 09:20:28 compute-0 groupadd[45932]: new group: name=unbound, GID=993
Nov 24 09:20:28 compute-0 useradd[45939]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 24 09:20:28 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 24 09:20:28 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 24 09:20:29 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:20:29 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:20:30 compute-0 systemd[1]: Reloading.
Nov 24 09:20:30 compute-0 systemd-rc-local-generator[46437]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:20:30 compute-0 systemd-sysv-generator[46441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:20:30 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:20:30 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:20:30 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:20:30 compute-0 systemd[1]: run-r59408ae4cf3d470c892871081a66dccd.service: Deactivated successfully.
Nov 24 09:20:30 compute-0 sudo[45906]: pam_unix(sudo:session): session closed for user root
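
The second phase is the install proper (state=present, download_only back to its default). A sketch (task name assumed, arguments from the log):

    - name: Install openvswitch   # task name assumed
      become: true
      ansible.legacy.dnf:
        name:
          - openvswitch
        state: present

The transaction scriptlets are visible above: an SELinux policy reload (the SID-table conversion), creation of the unbound user and group by a dependency, and a man-db cache rebuild.
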
Nov 24 09:20:31 compute-0 sudo[47006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikwjapgfqpgzhwqjmtlzyhlqehpseljq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976031.052834-212-93865978709580/AnsiballZ_systemd.py'
Nov 24 09:20:31 compute-0 sudo[47006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:31 compute-0 python3.9[47008]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:20:31 compute-0 systemd[1]: Reloading.
Nov 24 09:20:32 compute-0 systemd-rc-local-generator[47041]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:20:32 compute-0 systemd-sysv-generator[47045]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:20:32 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 24 09:20:32 compute-0 chown[47052]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 24 09:20:32 compute-0 ovs-ctl[47057]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 24 09:20:32 compute-0 ovs-ctl[47057]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 24 09:20:32 compute-0 ovs-ctl[47057]: Starting ovsdb-server [  OK  ]
Nov 24 09:20:32 compute-0 ovs-vsctl[47106]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 24 09:20:32 compute-0 ovs-vsctl[47125]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"feb242b9-6422-4c37-bc7a-5c14a79beaf8\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 24 09:20:32 compute-0 ovs-ctl[47057]: Configuring Open vSwitch system IDs [  OK  ]
Nov 24 09:20:32 compute-0 ovs-vsctl[47131]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 24 09:20:32 compute-0 ovs-ctl[47057]: Enabling remote OVSDB managers [  OK  ]
Nov 24 09:20:32 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 24 09:20:32 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 24 09:20:32 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 24 09:20:32 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 24 09:20:32 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 24 09:20:32 compute-0 ovs-ctl[47175]: Inserting openvswitch module [  OK  ]
Nov 24 09:20:32 compute-0 ovs-ctl[47144]: Starting ovs-vswitchd [  OK  ]
Nov 24 09:20:32 compute-0 ovs-vsctl[47192]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 24 09:20:32 compute-0 ovs-ctl[47144]: Enabling remote OVSDB managers [  OK  ]
Nov 24 09:20:32 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 24 09:20:32 compute-0 systemd[1]: Starting Open vSwitch...
Nov 24 09:20:32 compute-0 systemd[1]: Finished Open vSwitch.
Nov 24 09:20:32 compute-0 sudo[47006]: pam_unix(sudo:session): session closed for user root
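
With the package in place, a single ansible.builtin.systemd task both enables and starts the unit. A sketch from the logged arguments (task name assumed):

    - name: Enable and start Open vSwitch   # task name assumed
      become: true
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        state: started

On this first start ovs-ctl initializes /etc/openvswitch/conf.db, stamps the system IDs via ovs-vsctl, and loads the openvswitch kernel module; the chown complaint about /run/openvswitch appears harmless here, since the database unit goes on to start successfully.
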
Nov 24 09:20:33 compute-0 python3.9[47344]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:20:34 compute-0 sudo[47494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhbudaoncwgmjgalxcwdcrkijzlbtktz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976034.3318229-266-33304164356663/AnsiballZ_sefcontext.py'
Nov 24 09:20:34 compute-0 sudo[47494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:35 compute-0 python3.9[47496]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 24 09:20:36 compute-0 kernel: SELinux:  Converting 2744 SID table entries...
Nov 24 09:20:36 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 09:20:36 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 09:20:36 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 09:20:36 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 09:20:36 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 09:20:36 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 09:20:36 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 09:20:36 compute-0 sudo[47494]: pam_unix(sudo:session): session closed for user root
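
community.general.sefcontext persists an SELinux file-context mapping so the label survives a relabel; reload=True is what triggers the policy reload visible in the kernel lines above. A sketch (task name assumed, arguments from the log):

    - name: Label /var/lib/edpm-config for containers   # task name assumed
      become: true
      community.general.sefcontext:
        target: '/var/lib/edpm-config(/.*)?'
        setype: container_file_t
        selevel: s0
        state: present
        reload: true
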
Nov 24 09:20:37 compute-0 python3.9[47651]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:20:38 compute-0 sudo[47807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvpidkfdehnmvruxefxdtlsgyajuxauk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976037.8345215-320-240127201531750/AnsiballZ_dnf.py'
Nov 24 09:20:38 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 24 09:20:38 compute-0 sudo[47807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:38 compute-0 python3.9[47809]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:20:39 compute-0 sudo[47807]: pam_unix(sudo:session): session closed for user root
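
The host-dependency install is one dnf task over the whole package list. A sketch with the list copied from the logged invocation (task name assumed):

    - name: Install EDPM host dependencies   # task name assumed
      become: true
      ansible.legacy.dnf:
        state: present
        name:
          - driverctl
          - lvm2
          - crudini
          - jq
          - nftables
          - NetworkManager
          - openstack-selinux
          - python3-libselinux
          - python3-pyyaml
          - rsync
          - tmpwatch
          - sysstat
          - iproute-tc
          - ksmtuned
          - systemd-container
          - crypto-policies-scripts
          - grubby
          - sos
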
Nov 24 09:20:40 compute-0 sudo[47960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anziaotnfsqftktqvcevktlnusoighxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976039.8465364-344-14958131569507/AnsiballZ_command.py'
Nov 24 09:20:40 compute-0 sudo[47960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:40 compute-0 python3.9[47962]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:20:41 compute-0 sudo[47960]: pam_unix(sudo:session): session closed for user root
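
Immediately afterwards the same package set is verified with rpm -V, which exits non-zero if any on-disk file diverges from the package database. A sketch of the command task (task name assumed, command taken from the log):

    - name: Verify the installed packages   # task name assumed
      become: true
      ansible.legacy.command:
        cmd: >-
          rpm -V driverctl lvm2 crudini jq nftables NetworkManager
          openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch
          sysstat iproute-tc ksmtuned systemd-container
          crypto-policies-scripts grubby sos
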
Nov 24 09:20:41 compute-0 sudo[48247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzovzakhbagrgsoiopdusiufqjgotgem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976041.5292058-368-111931596796871/AnsiballZ_file.py'
Nov 24 09:20:41 compute-0 sudo[48247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:42 compute-0 python3.9[48249]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 09:20:42 compute-0 sudo[48247]: pam_unix(sudo:session): session closed for user root
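
ansible.builtin.file then creates the directory the sefcontext rule was prepared for, applying the same label directly. A sketch (task name assumed, arguments from the log):

    - name: Create /var/lib/edpm-config   # task name assumed
      become: true
      ansible.builtin.file:
        path: /var/lib/edpm-config
        state: directory
        mode: '0750'
        setype: container_file_t
        selevel: s0

Pairing this file task with the earlier sefcontext task means the context is both applied now and persisted for any future restorecon.
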
Nov 24 09:20:43 compute-0 python3.9[48399]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:20:43 compute-0 sudo[48551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btazqgluewkgwpobgpirtkipiyljqmbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976043.4458177-416-116356025473815/AnsiballZ_dnf.py'
Nov 24 09:20:43 compute-0 sudo[48551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:43 compute-0 python3.9[48553]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:20:45 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:20:45 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:20:45 compute-0 systemd[1]: Reloading.
Nov 24 09:20:45 compute-0 systemd-rc-local-generator[48593]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:20:45 compute-0 systemd-sysv-generator[48596]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:20:45 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:20:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:20:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:20:46 compute-0 systemd[1]: run-rbeca849af7a84e6a806d4b74c2e7d183.service: Deactivated successfully.
Nov 24 09:20:46 compute-0 sudo[48551]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:46 compute-0 sudo[48868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdfusxnpaxbmhpibhlfjpimnjhyhiyae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976046.4816027-440-90757212664688/AnsiballZ_systemd.py'
Nov 24 09:20:46 compute-0 sudo[48868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:47 compute-0 python3.9[48870]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:20:47 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 09:20:47 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 24 09:20:47 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 24 09:20:47 compute-0 systemd[1]: Stopping Network Manager...
Nov 24 09:20:47 compute-0 NetworkManager[7187]: <info>  [1763976047.0901] caught SIGTERM, shutting down normally.
Nov 24 09:20:47 compute-0 NetworkManager[7187]: <info>  [1763976047.0919] dhcp4 (eth0): canceled DHCP transaction
Nov 24 09:20:47 compute-0 NetworkManager[7187]: <info>  [1763976047.0920] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 09:20:47 compute-0 NetworkManager[7187]: <info>  [1763976047.0920] dhcp4 (eth0): state changed no lease
Nov 24 09:20:47 compute-0 NetworkManager[7187]: <info>  [1763976047.0922] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 09:20:47 compute-0 NetworkManager[7187]: <info>  [1763976047.0999] exiting (success)
Nov 24 09:20:47 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 09:20:47 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 09:20:47 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 09:20:47 compute-0 systemd[1]: Stopped Network Manager.
Nov 24 09:20:47 compute-0 systemd[1]: NetworkManager.service: Consumed 9.078s CPU time, 4.3M memory peak, read 0B from disk, written 31.5K to disk.
Nov 24 09:20:47 compute-0 systemd[1]: Starting Network Manager...
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.1610] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:428e28ae-891b-4271-8668-6c1110086104)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.1611] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.1675] manager[0x55cb566cf090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 09:20:47 compute-0 systemd[1]: Starting Hostname Service...
Nov 24 09:20:47 compute-0 systemd[1]: Started Hostname Service.
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2443] hostname: hostname: using hostnamed
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2444] hostname: static hostname changed from (none) to "compute-0"
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2449] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2454] manager[0x55cb566cf090]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2454] manager[0x55cb566cf090]: rfkill: WWAN hardware radio set enabled
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2473] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2481] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2482] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2483] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2483] manager: Networking is enabled by state file
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2485] settings: Loaded settings plugin: keyfile (internal)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2488] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2518] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2528] dhcp: init: Using DHCP client 'internal'
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2531] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2537] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2543] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2551] device (lo): Activation: starting connection 'lo' (78ddbdbd-6a47-40ea-a116-11a5bade7fe9)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2557] device (eth0): carrier: link connected
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2561] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2566] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2567] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2574] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2582] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2588] device (eth1): carrier: link connected
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2591] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2597] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (e1906d45-4a19-53b0-9584-1b272dee14f0) (indicated)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2598] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2605] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2613] device (eth1): Activation: starting connection 'ci-private-network' (e1906d45-4a19-53b0-9584-1b272dee14f0)
Nov 24 09:20:47 compute-0 systemd[1]: Started Network Manager.
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2620] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2632] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2635] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2637] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2639] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2640] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2642] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2643] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2646] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2658] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2661] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2667] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2681] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2687] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2689] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2692] device (lo): Activation: successful, device activated.
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2697] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2698] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2700] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2702] device (eth1): Activation: successful, device activated.
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2708] dhcp4 (eth0): state changed new lease, address=38.129.56.124
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2714] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 09:20:47 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2804] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2824] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2826] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2833] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2838] device (eth0): Activation: successful, device activated.
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2845] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 09:20:47 compute-0 NetworkManager[48883]: <info>  [1763976047.2849] manager: startup complete
Nov 24 09:20:47 compute-0 sudo[48868]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:47 compute-0 systemd[1]: Finished Network Manager Wait Online.
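
The restart is an ordinary ansible.builtin.systemd task; it is needed so NetworkManager picks up the just-installed NetworkManager-ovs plugin, and the "Loaded device plugin: NMOvsFactory" line above confirms it did. A sketch (task name assumed, arguments from the log):

    - name: Restart NetworkManager   # task name assumed
      become: true
      ansible.builtin.systemd:
        name: NetworkManager
        state: restarted

Note that eth0 and eth1 are re-assumed rather than torn down on restart, so existing addressing is preserved and the controller's connection rides through it.
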
Nov 24 09:20:47 compute-0 sudo[49094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdkfxkcpaxaoeaumqzyyutoyorwrhbwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976047.581822-464-68847182467218/AnsiballZ_dnf.py'
Nov 24 09:20:47 compute-0 sudo[49094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:48 compute-0 python3.9[49096]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:20:52 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:20:52 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:20:52 compute-0 systemd[1]: Reloading.
Nov 24 09:20:52 compute-0 systemd-sysv-generator[49152]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:20:52 compute-0 systemd-rc-local-generator[49149]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:20:52 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:20:53 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:20:53 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:20:53 compute-0 systemd[1]: run-raf544e34c575468c839d8fcb06a13c7a.service: Deactivated successfully.
Nov 24 09:20:53 compute-0 sudo[49094]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:54 compute-0 sudo[49552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozyolwserrptlvakgznnlnuelmlspmkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976054.035486-500-111109321810566/AnsiballZ_stat.py'
Nov 24 09:20:54 compute-0 sudo[49552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:54 compute-0 python3.9[49554]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:20:54 compute-0 sudo[49552]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:55 compute-0 sudo[49704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnrjsoelqelehqjkxhwlvduyirklyqeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976054.721021-527-260919586461385/AnsiballZ_ini_file.py'
Nov 24 09:20:55 compute-0 sudo[49704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:55 compute-0 python3.9[49706]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:20:55 compute-0 sudo[49704]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:55 compute-0 sudo[49858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwonhwtquxgfwweroglqlratnaxknpqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976055.6635766-557-212636698228342/AnsiballZ_ini_file.py'
Nov 24 09:20:55 compute-0 sudo[49858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:56 compute-0 python3.9[49860]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:20:56 compute-0 sudo[49858]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:56 compute-0 sudo[50010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtdcpuilqhovzhwksmzmmvjrspbnmama ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976056.2942603-557-185035385834403/AnsiballZ_ini_file.py'
Nov 24 09:20:56 compute-0 sudo[50010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:56 compute-0 python3.9[50012]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:20:56 compute-0 sudo[50010]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:57 compute-0 sudo[50162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivimfhxyrdygdoqziexzmqsuauqbzzwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976056.9668918-602-162189521416151/AnsiballZ_ini_file.py'
Nov 24 09:20:57 compute-0 sudo[50162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:57 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 09:20:57 compute-0 python3.9[50164]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:20:57 compute-0 sudo[50162]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:57 compute-0 sudo[50314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkbjzlhsrgeziorcmdgymliedheykxuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976057.6421425-602-220683299260092/AnsiballZ_ini_file.py'
Nov 24 09:20:57 compute-0 sudo[50314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:58 compute-0 python3.9[50316]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:20:58 compute-0 sudo[50314]: pam_unix(sudo:session): session closed for user root
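
The five community.general.ini_file invocations above share one pattern: ensure no-auto-default=* in the [main] section of NetworkManager.conf, and remove any dns=none or rc-manager=unmanaged overrides from both NetworkManager.conf and the cloud-init drop-in, handing DNS and resolv.conf handling back to NetworkManager's defaults. A sketch of the first edit (task name assumed, arguments from the log):

    - name: Keep NetworkManager from auto-activating new devices   # task name assumed
      become: true
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: no-auto-default
        value: '*'
        state: present
        no_extra_spaces: true
        mode: '0644'
        backup: true
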
Nov 24 09:20:58 compute-0 sudo[50466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isxrsdrinpjoqjzejdmryoaumomsyipk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976058.3168194-647-47925082577224/AnsiballZ_stat.py'
Nov 24 09:20:58 compute-0 sudo[50466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:58 compute-0 python3.9[50468]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:20:58 compute-0 sudo[50466]: pam_unix(sudo:session): session closed for user root
Nov 24 09:20:59 compute-0 sudo[50589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcrcscijyjxkginsybefakkstsdjqqkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976058.3168194-647-47925082577224/AnsiballZ_copy.py'
Nov 24 09:20:59 compute-0 sudo[50589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:20:59 compute-0 python3.9[50591]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976058.3168194-647-47925082577224/.source _original_basename=.50xt2hh0 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:20:59 compute-0 sudo[50589]: pam_unix(sudo:session): session closed for user root
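
The dhclient hook lands via the usual stat-then-copy pair: ansible.legacy.stat checks the destination, and ansible.legacy.copy only ships the file when the checksum differs. A sketch; the src value is a stand-in, since the log only shows the controller-side temp file:

    - name: Install /etc/dhcp/dhclient-enter-hooks   # task name assumed
      become: true
      ansible.legacy.copy:
        src: dhclient-enter-hooks   # local source name assumed
        dest: /etc/dhcp/dhclient-enter-hooks
        mode: '0755'

The checksum recorded above (f6278a40de79…) is what makes the task idempotent on later runs.
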
Nov 24 09:21:00 compute-0 sudo[50741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehrwnijktlnnckgqpyjturqzwpiycpto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976059.714064-692-127952341963077/AnsiballZ_file.py'
Nov 24 09:21:00 compute-0 sudo[50741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:00 compute-0 python3.9[50743]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:21:00 compute-0 sudo[50741]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:00 compute-0 sudo[50893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dowouruinqwqwmhelnksjzgzggjjfdwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976060.382953-716-90421333385487/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 24 09:21:00 compute-0 sudo[50893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:00 compute-0 python3.9[50895]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 24 09:21:00 compute-0 sudo[50893]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:01 compute-0 sudo[51045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lderxcmtwwdhxwdzlbptcuqsnpuzjmva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976061.246434-743-124335745308897/AnsiballZ_file.py'
Nov 24 09:21:01 compute-0 sudo[51045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:01 compute-0 python3.9[51047]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:21:01 compute-0 sudo[51045]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:02 compute-0 sudo[51197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcxquuhnmtqxwqkrcedvyhfziqpetdre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976062.0875065-773-34259811743015/AnsiballZ_stat.py'
Nov 24 09:21:02 compute-0 sudo[51197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:02 compute-0 sudo[51197]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:02 compute-0 sudo[51320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzitxqdnbejmzfoqjksvxvcopzzgqmlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976062.0875065-773-34259811743015/AnsiballZ_copy.py'
Nov 24 09:21:02 compute-0 sudo[51320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:03 compute-0 sudo[51320]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:03 compute-0 sudo[51472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thqjzlqehcgrbmdwcfwhpmooenzgukmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976063.4620883-818-78952584731746/AnsiballZ_slurp.py'
Nov 24 09:21:03 compute-0 sudo[51472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:04 compute-0 python3.9[51474]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 24 09:21:04 compute-0 sudo[51472]: pam_unix(sudo:session): session closed for user root
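
ansible.builtin.slurp reads the rendered config back to the controller; it returns the file base64-encoded, so the playbook would decode it before use. A sketch (task and register names assumed; the debug task is illustrative only):

    - name: Read back /etc/os-net-config/config.yaml   # task name assumed
      become: true
      ansible.builtin.slurp:
        src: /etc/os-net-config/config.yaml
      register: net_config   # register name assumed

    - name: Show the decoded config   # illustrative only
      ansible.builtin.debug:
        msg: "{{ net_config.content | b64decode }}"
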
Nov 24 09:21:05 compute-0 sudo[51647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-canwvltahzgwxpkxwgtvnzxhyarmumdz ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976064.371647-845-166790073941294/async_wrapper.py j492266663712 300 /home/zuul/.ansible/tmp/ansible-tmp-1763976064.371647-845-166790073941294/AnsiballZ_edpm_os_net_config.py _'
Nov 24 09:21:05 compute-0 sudo[51647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:05 compute-0 ansible-async_wrapper.py[51649]: Invoked with j492266663712 300 /home/zuul/.ansible/tmp/ansible-tmp-1763976064.371647-845-166790073941294/AnsiballZ_edpm_os_net_config.py _
Nov 24 09:21:05 compute-0 ansible-async_wrapper.py[51652]: Starting module and watcher
Nov 24 09:21:05 compute-0 ansible-async_wrapper.py[51652]: Start watching 51653 (300)
Nov 24 09:21:05 compute-0 ansible-async_wrapper.py[51653]: Start module (51653)
Nov 24 09:21:05 compute-0 ansible-async_wrapper.py[51649]: Return async_wrapper task started.
Nov 24 09:21:05 compute-0 sudo[51647]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:05 compute-0 python3.9[51654]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
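
The async_wrapper lines show that edpm_os_net_config (a module from the EDPM Ansible collection) is launched asynchronously with a 300-second budget, so the play is not severed when the module reconfigures the very interfaces Ansible is connected over. A sketch of the task, with arguments from the logged invocation; the poll interval is an assumption:

    - name: Apply network configuration   # task name assumed
      become: true
      edpm_os_net_config:
        config_file: /etc/os-net-config/config.yaml
        use_nmstate: true
        detailed_exit_codes: true
        cleanup: true
        safe_defaults: false
        debug: true
      async: 300
      poll: 5   # poll interval assumed

With use_nmstate=True the module drives NetworkManager, which is why the checkpoint-create and connection-add audit lines follow below.
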
Nov 24 09:21:06 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 24 09:21:06 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 24 09:21:06 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 24 09:21:06 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 24 09:21:06 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1085] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1100] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1519] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1521] audit: op="connection-add" uuid="aad8d1d9-7cb6-4719-b0bf-6164f732cb2d" name="br-ex-br" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1534] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1535] audit: op="connection-add" uuid="39fa917a-09bb-4b82-a346-844b298146f9" name="br-ex-port" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1545] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1545] audit: op="connection-add" uuid="a3d3b688-2955-4094-b5f8-83639d501bcf" name="eth1-port" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1555] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1556] audit: op="connection-add" uuid="fb9f61cb-4567-4c98-98a2-3993566049b5" name="vlan20-port" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1565] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1566] audit: op="connection-add" uuid="2b8c5ec7-ba15-4e74-a10b-cb2bebdcea8e" name="vlan21-port" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1575] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1576] audit: op="connection-add" uuid="27ae5dba-377c-4494-8d41-88d24aeaaa57" name="vlan22-port" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1585] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1586] audit: op="connection-add" uuid="73dc000b-88a1-4b2a-b53b-51d2ea4354b0" name="vlan23-port" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1602] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1615] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1616] audit: op="connection-add" uuid="4258ee27-7347-40b4-a9b9-54fb04be66a1" name="br-ex-if" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1648] audit: op="connection-update" uuid="e1906d45-4a19-53b0-9584-1b272dee14f0" name="ci-private-network" args="connection.controller,connection.master,connection.port-type,connection.slave-type,connection.timestamp,ovs-interface.type,ipv4.never-default,ipv4.dns,ipv4.method,ipv4.routes,ipv4.addresses,ipv4.routing-rules,ipv6.routing-rules,ipv6.dns,ipv6.method,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routes,ovs-external-ids.data" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1661] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1662] audit: op="connection-add" uuid="103e838e-7533-4e4b-b2aa-936c799f1bbf" name="vlan20-if" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1675] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1677] audit: op="connection-add" uuid="17e97025-cc64-488a-a16a-7b21bbd99dfd" name="vlan21-if" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1690] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1692] audit: op="connection-add" uuid="d89d9a20-1fbb-4c0c-96aa-9b564f57a15a" name="vlan22-if" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1705] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1707] audit: op="connection-add" uuid="bc5ba9a4-7750-4b6f-a420-f0b6778e8180" name="vlan23-if" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1717] audit: op="connection-delete" uuid="d1aedfc8-f035-3e2e-89d5-e0202d550efc" name="Wired connection 1" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1727] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1735] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1739] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (aad8d1d9-7cb6-4719-b0bf-6164f732cb2d)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1739] audit: op="connection-activate" uuid="aad8d1d9-7cb6-4719-b0bf-6164f732cb2d" name="br-ex-br" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1741] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1747] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1750] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (39fa917a-09bb-4b82-a346-844b298146f9)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1752] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1757] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1760] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a3d3b688-2955-4094-b5f8-83639d501bcf)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1762] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1768] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1772] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (fb9f61cb-4567-4c98-98a2-3993566049b5)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1774] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1780] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1783] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (2b8c5ec7-ba15-4e74-a10b-cb2bebdcea8e)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1785] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1790] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1795] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (27ae5dba-377c-4494-8d41-88d24aeaaa57)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1796] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1802] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1805] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (73dc000b-88a1-4b2a-b53b-51d2ea4354b0)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1806] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1808] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1810] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1815] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1819] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1823] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (4258ee27-7347-40b4-a9b9-54fb04be66a1)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1823] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1826] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1828] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1829] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1830] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1839] device (eth1): disconnecting for new activation request.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1840] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1842] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1843] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1844] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1845] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1849] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1852] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (103e838e-7533-4e4b-b2aa-936c799f1bbf)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1853] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1854] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1855] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1856] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1858] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1862] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1865] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (17e97025-cc64-488a-a16a-7b21bbd99dfd)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1865] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1867] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1868] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1869] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1870] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1875] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1879] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (d89d9a20-1fbb-4c0c-96aa-9b564f57a15a)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1880] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1883] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1885] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1886] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1889] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1894] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1898] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (bc5ba9a4-7750-4b6f-a420-f0b6778e8180)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1899] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1903] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1905] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1906] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1908] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1923] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1925] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1929] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1931] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1937] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1941] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1945] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1949] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1951] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1955] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1959] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1963] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1965] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1970] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1975] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1979] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1981] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1986] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1990] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1994] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.1996] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2001] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2004] dhcp4 (eth0): canceled DHCP transaction
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2005] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2005] dhcp4 (eth0): state changed no lease
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2006] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2014] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51655 uid=0 result="fail" reason="Device is not activated"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2053] dhcp4 (eth0): state changed new lease, address=38.129.56.124
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2092] device (eth1): disconnecting for new activation request.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2093] audit: op="connection-activate" uuid="e1906d45-4a19-53b0-9584-1b272dee14f0" name="ci-private-network" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2247] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2260] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 24 09:21:07 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2268] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2279] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 24 09:21:07 compute-0 kernel: Timeout policy base is empty
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2286] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 24 09:21:07 compute-0 systemd-udevd[51660]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2289] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51655 uid=0 result="success"
Nov 24 09:21:07 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 09:21:07 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2508] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2608] device (eth1): Activation: starting connection 'ci-private-network' (e1906d45-4a19-53b0-9584-1b272dee14f0)
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2614] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2623] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 kernel: br-ex: entered promiscuous mode
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2626] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2633] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2637] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2641] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2642] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2650] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2652] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2653] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2654] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2662] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2667] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2671] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2675] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2679] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2682] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2686] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2689] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2693] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2695] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2699] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2702] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2706] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2710] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2716] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 kernel: vlan22: entered promiscuous mode
Nov 24 09:21:07 compute-0 systemd-udevd[51659]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2768] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2781] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2787] device (eth1): Activation: successful, device activated.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2799] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2813] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2832] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2836] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 kernel: vlan23: entered promiscuous mode
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2850] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2896] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 24 09:21:07 compute-0 kernel: vlan21: entered promiscuous mode
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2929] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2954] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2955] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.2962] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 09:21:07 compute-0 kernel: vlan20: entered promiscuous mode
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3010] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3032] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3050] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3056] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3064] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3082] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3097] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3139] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3142] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3143] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3149] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3165] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3203] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3204] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 09:21:07 compute-0 NetworkManager[48883]: <info>  [1763976067.3209] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 09:21:08 compute-0 NetworkManager[48883]: <info>  [1763976068.4554] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51655 uid=0 result="success"
Nov 24 09:21:08 compute-0 NetworkManager[48883]: <info>  [1763976068.5984] checkpoint[0x55cb566a6950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 24 09:21:08 compute-0 NetworkManager[48883]: <info>  [1763976068.5986] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51655 uid=0 result="success"
Nov 24 09:21:08 compute-0 sudo[52012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdvkkiifzieejipjpbntyzpsmpdhhjvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976068.357338-845-18611249153116/AnsiballZ_async_status.py'
Nov 24 09:21:08 compute-0 sudo[52012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:08 compute-0 NetworkManager[48883]: <info>  [1763976068.9126] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51655 uid=0 result="success"
Nov 24 09:21:08 compute-0 NetworkManager[48883]: <info>  [1763976068.9137] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51655 uid=0 result="success"
Nov 24 09:21:09 compute-0 python3.9[52014]: ansible-ansible.legacy.async_status Invoked with jid=j492266663712.51649 mode=status _async_dir=/root/.ansible_async
Nov 24 09:21:09 compute-0 sudo[52012]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:09 compute-0 NetworkManager[48883]: <info>  [1763976069.0833] audit: op="networking-control" arg="global-dns-configuration" pid=51655 uid=0 result="success"
Nov 24 09:21:09 compute-0 NetworkManager[48883]: <info>  [1763976069.0861] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 24 09:21:09 compute-0 NetworkManager[48883]: <info>  [1763976069.0889] audit: op="networking-control" arg="global-dns-configuration" pid=51655 uid=0 result="success"
Nov 24 09:21:09 compute-0 NetworkManager[48883]: <info>  [1763976069.0907] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51655 uid=0 result="success"
Nov 24 09:21:09 compute-0 NetworkManager[48883]: <info>  [1763976069.2125] checkpoint[0x55cb566a6a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 24 09:21:09 compute-0 NetworkManager[48883]: <info>  [1763976069.2129] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51655 uid=0 result="success"
Nov 24 09:21:09 compute-0 ansible-async_wrapper.py[51653]: Module complete (51653)
Nov 24 09:21:10 compute-0 ansible-async_wrapper.py[51652]: Done in kid B.
Nov 24 09:21:12 compute-0 sudo[52116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlaznhyshqrrhrdsasobpovqcasfvifi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976068.357338-845-18611249153116/AnsiballZ_async_status.py'
Nov 24 09:21:12 compute-0 sudo[52116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:12 compute-0 python3.9[52118]: ansible-ansible.legacy.async_status Invoked with jid=j492266663712.51649 mode=status _async_dir=/root/.ansible_async
Nov 24 09:21:12 compute-0 sudo[52116]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:12 compute-0 sudo[52216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezqdtgoypnmxoaynzajivrqbjoniltzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976068.357338-845-18611249153116/AnsiballZ_async_status.py'
Nov 24 09:21:12 compute-0 sudo[52216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:12 compute-0 python3.9[52218]: ansible-ansible.legacy.async_status Invoked with jid=j492266663712.51649 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 09:21:13 compute-0 sudo[52216]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:13 compute-0 sudo[52368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuanfgeaelicrfygjsdordfqgggcuded ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976073.2248254-926-24904996528585/AnsiballZ_stat.py'
Nov 24 09:21:13 compute-0 sudo[52368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:13 compute-0 python3.9[52370]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:21:13 compute-0 sudo[52368]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:13 compute-0 sudo[52491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgeyopgrjbzkjwzitmwmyamjpvdtzuhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976073.2248254-926-24904996528585/AnsiballZ_copy.py'
Nov 24 09:21:13 compute-0 sudo[52491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:14 compute-0 python3.9[52493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976073.2248254-926-24904996528585/.source.returncode _original_basename=.t71gflu_ follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:21:14 compute-0 sudo[52491]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:14 compute-0 sudo[52643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxkizmdjezskeutfdthpazkxnznxkoxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976074.5801172-974-95404406898453/AnsiballZ_stat.py'
Nov 24 09:21:14 compute-0 sudo[52643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:14 compute-0 python3.9[52645]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:21:14 compute-0 sudo[52643]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:15 compute-0 sudo[52766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcxpopareahzpzjnorsnocrnphbgbmql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976074.5801172-974-95404406898453/AnsiballZ_copy.py'
Nov 24 09:21:15 compute-0 sudo[52766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:15 compute-0 python3.9[52768]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976074.5801172-974-95404406898453/.source.cfg _original_basename=.q5gozw91 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:21:15 compute-0 sudo[52766]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:16 compute-0 sudo[52919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qggkziqdhzbyrgghitweqqoruafvodxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976075.8166032-1019-175754910089586/AnsiballZ_systemd.py'
Nov 24 09:21:16 compute-0 sudo[52919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:16 compute-0 python3.9[52921]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:21:16 compute-0 systemd[1]: Reloading Network Manager...
Nov 24 09:21:16 compute-0 NetworkManager[48883]: <info>  [1763976076.4409] audit: op="reload" arg="0" pid=52925 uid=0 result="success"
Nov 24 09:21:16 compute-0 NetworkManager[48883]: <info>  [1763976076.4413] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 24 09:21:16 compute-0 systemd[1]: Reloaded Network Manager.
Nov 24 09:21:16 compute-0 sudo[52919]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:16 compute-0 sshd-session[44883]: Connection closed by 192.168.122.30 port 35074
Nov 24 09:21:16 compute-0 sshd-session[44880]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:21:16 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 24 09:21:16 compute-0 systemd-logind[822]: Session 10 logged out. Waiting for processes to exit.
Nov 24 09:21:16 compute-0 systemd[1]: session-10.scope: Consumed 47.610s CPU time.
Nov 24 09:21:16 compute-0 systemd-logind[822]: Removed session 10.
Nov 24 09:21:17 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 09:21:22 compute-0 sshd-session[52959]: Accepted publickey for zuul from 192.168.122.30 port 40464 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:21:22 compute-0 systemd-logind[822]: New session 11 of user zuul.
Nov 24 09:21:22 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 24 09:21:22 compute-0 sshd-session[52959]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:21:23 compute-0 python3.9[53112]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:21:24 compute-0 python3.9[53266]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:21:25 compute-0 python3.9[53460]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:21:26 compute-0 sshd-session[52962]: Connection closed by 192.168.122.30 port 40464
Nov 24 09:21:26 compute-0 sshd-session[52959]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:21:26 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 24 09:21:26 compute-0 systemd[1]: session-11.scope: Consumed 2.084s CPU time.
Nov 24 09:21:26 compute-0 systemd-logind[822]: Session 11 logged out. Waiting for processes to exit.
Nov 24 09:21:26 compute-0 systemd-logind[822]: Removed session 11.
Nov 24 09:21:26 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 09:21:31 compute-0 sshd-session[53488]: Accepted publickey for zuul from 192.168.122.30 port 57052 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:21:31 compute-0 systemd-logind[822]: New session 12 of user zuul.
Nov 24 09:21:31 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 24 09:21:31 compute-0 sshd-session[53488]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:21:32 compute-0 python3.9[53642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:21:33 compute-0 python3.9[53796]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:21:34 compute-0 sudo[53950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxfbqnigyqugfoymxjieextankrllsof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976094.173937-80-160681546881891/AnsiballZ_setup.py'
Nov 24 09:21:34 compute-0 sudo[53950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:34 compute-0 python3.9[53952]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:21:35 compute-0 sudo[53950]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:35 compute-0 sudo[54035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyyyxfzvkpdppgkznrdhpuhkvyjozqfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976094.173937-80-160681546881891/AnsiballZ_dnf.py'
Nov 24 09:21:35 compute-0 sudo[54035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:35 compute-0 python3.9[54037]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:21:37 compute-0 sudo[54035]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:37 compute-0 sudo[54188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyyrwykhggkopebxrrqtskewuqhatjuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976097.3311565-116-114968281211768/AnsiballZ_setup.py'
Nov 24 09:21:37 compute-0 sudo[54188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:37 compute-0 python3.9[54190]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:21:38 compute-0 sudo[54188]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:38 compute-0 sudo[54384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sniiyqmusnubwffeuyzykqbfuvflehzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976098.5355062-149-83553292452466/AnsiballZ_file.py'
Nov 24 09:21:38 compute-0 sudo[54384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:39 compute-0 python3.9[54386]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:21:39 compute-0 sudo[54384]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:39 compute-0 sudo[54536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nenxqszcybfdxuliwmlugntjkfounrhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976099.3565266-173-195860520651732/AnsiballZ_command.py'
Nov 24 09:21:39 compute-0 sudo[54536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:39 compute-0 python3.9[54538]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2971605487-merged.mount: Deactivated successfully.
Nov 24 09:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2688293326-merged.mount: Deactivated successfully.
Nov 24 09:21:39 compute-0 podman[54539]: 2025-11-24 09:21:39.998321607 +0000 UTC m=+0.048792895 system refresh
Nov 24 09:21:40 compute-0 sudo[54536]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:40 compute-0 sudo[54699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfvbdikxtipdxewlhwwddgynvzbcfwuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976100.312343-197-245510051174588/AnsiballZ_stat.py'
Nov 24 09:21:40 compute-0 sudo[54699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:40 compute-0 python3.9[54701]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:21:40 compute-0 sudo[54699]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:21:41 compute-0 sudo[54822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebklrbdajujnouoivyziubyngbachwha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976100.312343-197-245510051174588/AnsiballZ_copy.py'
Nov 24 09:21:41 compute-0 sudo[54822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:41 compute-0 python3.9[54824]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976100.312343-197-245510051174588/.source.json follow=False _original_basename=podman_network_config.j2 checksum=63722bb70ee7f1b8ccf45f9fdfeabda0bdcf1ff9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:21:41 compute-0 sudo[54822]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:42 compute-0 sudo[54974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpodfodyikljpxajlbnhmiozhavxntmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976102.5894756-242-222505728776380/AnsiballZ_stat.py'
Nov 24 09:21:42 compute-0 sudo[54974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:43 compute-0 python3.9[54976]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:21:43 compute-0 sudo[54974]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:43 compute-0 sudo[55097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nshhieqqttfckweexnfyvgcsxdlubxpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976102.5894756-242-222505728776380/AnsiballZ_copy.py'
Nov 24 09:21:43 compute-0 sudo[55097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:43 compute-0 python3.9[55099]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763976102.5894756-242-222505728776380/.source.conf follow=False _original_basename=registries.conf.j2 checksum=d119d0981ddb964361aab9d45fb39837ba29c925 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:21:43 compute-0 sudo[55097]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:44 compute-0 sudo[55249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqysgoacybedvllsaudaeuknjzuvtirb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976103.9711933-290-5130920203135/AnsiballZ_ini_file.py'
Nov 24 09:21:44 compute-0 sudo[55249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:44 compute-0 python3.9[55251]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:21:44 compute-0 sudo[55249]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:44 compute-0 sudo[55401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbvinazxmrlblmylrmlvwxfrjrxgvupm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976104.7062676-290-141590154046781/AnsiballZ_ini_file.py'
Nov 24 09:21:44 compute-0 sudo[55401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:45 compute-0 python3.9[55403]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:21:45 compute-0 sudo[55401]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:45 compute-0 sudo[55553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwutgbcutzjvgmdovhdbqbrcqceloxji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976105.2805974-290-132156925157353/AnsiballZ_ini_file.py'
Nov 24 09:21:45 compute-0 sudo[55553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:45 compute-0 python3.9[55555]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:21:45 compute-0 sudo[55553]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:46 compute-0 sudo[55705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apfeqokfduxizscbmhjfycqvjezwhsai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976105.8693175-290-231451788165081/AnsiballZ_ini_file.py'
Nov 24 09:21:46 compute-0 sudo[55705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:46 compute-0 python3.9[55707]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:21:46 compute-0 sudo[55705]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:47 compute-0 sudo[55857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmsanzopqbnhxhzwxlsnzggcioulszkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976106.830572-383-161816666487865/AnsiballZ_dnf.py'
Nov 24 09:21:47 compute-0 sudo[55857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:47 compute-0 python3.9[55859]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:21:48 compute-0 sudo[55857]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:49 compute-0 sudo[56010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpkechzndrwmnnwdcegqpayoevkrrool ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976109.1684275-416-101985184631848/AnsiballZ_setup.py'
Nov 24 09:21:49 compute-0 sudo[56010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:49 compute-0 python3.9[56012]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:21:49 compute-0 sudo[56010]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:50 compute-0 sudo[56164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgxlryxflzmeetkubxugruzverqjbhan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976109.9624312-440-89568649565351/AnsiballZ_stat.py'
Nov 24 09:21:50 compute-0 sudo[56164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:50 compute-0 python3.9[56166]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:21:50 compute-0 sudo[56164]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:50 compute-0 sudo[56316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwyzxvmnghmphjykikqlchnllonckgzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976110.6931202-467-99177255712381/AnsiballZ_stat.py'
Nov 24 09:21:50 compute-0 sudo[56316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:51 compute-0 python3.9[56318]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:21:51 compute-0 sudo[56316]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:51 compute-0 sudo[56468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqrlaalzuiuznpotczqeqwwaszswphrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976111.5909853-497-254393341604317/AnsiballZ_command.py'
Nov 24 09:21:51 compute-0 sudo[56468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:52 compute-0 python3.9[56470]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:21:52 compute-0 sudo[56468]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:52 compute-0 sudo[56621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iviflzegexqlwcohrnnqntmjcaeuasox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976112.4947531-527-7218036871825/AnsiballZ_service_facts.py'
Nov 24 09:21:52 compute-0 sudo[56621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:53 compute-0 python3.9[56623]: ansible-service_facts Invoked
Nov 24 09:21:53 compute-0 network[56640]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:21:53 compute-0 network[56641]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:21:53 compute-0 network[56642]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:21:56 compute-0 sudo[56621]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:58 compute-0 sudo[56926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghhjsljzpprxohssnfsksxqpplfvzmue ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1763976117.9560144-572-128989342847166/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1763976117.9560144-572-128989342847166/args'
Nov 24 09:21:58 compute-0 sudo[56926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:58 compute-0 sudo[56926]: pam_unix(sudo:session): session closed for user root
Nov 24 09:21:59 compute-0 sudo[57093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvjhjteuplcxpoubbmfsdkjdbqehdhar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976118.9927423-605-157920192472853/AnsiballZ_dnf.py'
Nov 24 09:21:59 compute-0 sudo[57093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:21:59 compute-0 python3.9[57095]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:22:00 compute-0 sudo[57093]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:01 compute-0 sudo[57246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsfgulbdvnfghgakswcxtoxgligznxzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976121.343588-644-241191968091973/AnsiballZ_package_facts.py'
Nov 24 09:22:01 compute-0 sudo[57246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:02 compute-0 python3.9[57248]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 09:22:02 compute-0 sudo[57246]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:03 compute-0 sudo[57398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtpqgrhipukocyenrdkgzhkzstgxruik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976123.0978699-674-196734505874222/AnsiballZ_stat.py'
Nov 24 09:22:03 compute-0 sudo[57398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:03 compute-0 python3.9[57400]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:03 compute-0 sudo[57398]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:04 compute-0 sudo[57523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceipmsjalkjvwkghtkxilrdvuutdknki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976123.0978699-674-196734505874222/AnsiballZ_copy.py'
Nov 24 09:22:04 compute-0 sudo[57523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:04 compute-0 python3.9[57525]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976123.0978699-674-196734505874222/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:04 compute-0 sudo[57523]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:04 compute-0 sudo[57677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmfvaafeybumwgzjpworrbxkdbvjaocn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976124.604797-719-75770248892979/AnsiballZ_stat.py'
Nov 24 09:22:04 compute-0 sudo[57677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:05 compute-0 python3.9[57679]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:05 compute-0 sudo[57677]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:05 compute-0 sudo[57802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyldfrdnwibmihsdsuypjwurbsqhmtfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976124.604797-719-75770248892979/AnsiballZ_copy.py'
Nov 24 09:22:05 compute-0 sudo[57802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:05 compute-0 python3.9[57804]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976124.604797-719-75770248892979/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:05 compute-0 sudo[57802]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:07 compute-0 sudo[57956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nazqooklrijrxpfpuykatsfabsbnzgxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976126.759559-782-883170137038/AnsiballZ_lineinfile.py'
Nov 24 09:22:07 compute-0 sudo[57956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:07 compute-0 python3.9[57958]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:07 compute-0 sudo[57956]: pam_unix(sudo:session): session closed for user root
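
PEERNTP=no keeps DHCP-supplied NTP servers from overriding the chrony configuration just installed. Reconstructed from the logged parameters (task name assumed):

    - name: Stop DHCP from injecting NTP peers
      ansible.builtin.lineinfile:
        path: /etc/sysconfig/network
        regexp: '^PEERNTP='
        line: PEERNTP=no
        create: true
        mode: "0644"
        backup: true
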
Nov 24 09:22:08 compute-0 sudo[58110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uywuadlxftjcmrbsaopvyjchdryisiwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976128.4207637-827-223391653140658/AnsiballZ_setup.py'
Nov 24 09:22:08 compute-0 sudo[58110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:08 compute-0 python3.9[58112]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:22:09 compute-0 sudo[58110]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:09 compute-0 sudo[58194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxvyzomawiyxjokfjifnphaycrkvrime ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976128.4207637-827-223391653140658/AnsiballZ_systemd.py'
Nov 24 09:22:09 compute-0 sudo[58194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:10 compute-0 python3.9[58196]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:22:10 compute-0 sudo[58194]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:11 compute-0 sudo[58348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppkypaqmwsraiiagcshdgirhotixwnol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976130.8841841-875-219475387271616/AnsiballZ_setup.py'
Nov 24 09:22:11 compute-0 sudo[58348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:11 compute-0 python3.9[58350]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:22:11 compute-0 sudo[58348]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:11 compute-0 sudo[58432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tboeulxyxtwifzsnvoxyixynkmhbedli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976130.8841841-875-219475387271616/AnsiballZ_systemd.py'
Nov 24 09:22:11 compute-0 sudo[58432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:12 compute-0 python3.9[58434]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:22:12 compute-0 chronyd[830]: chronyd exiting
Nov 24 09:22:12 compute-0 systemd[1]: Stopping NTP client/server...
Nov 24 09:22:12 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 24 09:22:12 compute-0 systemd[1]: Stopped NTP client/server.
Nov 24 09:22:12 compute-0 systemd[1]: Starting NTP client/server...
Nov 24 09:22:12 compute-0 chronyd[58443]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 09:22:12 compute-0 chronyd[58443]: Frequency -23.734 +/- 0.128 ppm read from /var/lib/chrony/drift
Nov 24 09:22:12 compute-0 chronyd[58443]: Loaded seccomp filter (level 2)
Nov 24 09:22:12 compute-0 systemd[1]: Started NTP client/server.
Nov 24 09:22:12 compute-0 sudo[58432]: pam_unix(sudo:session): session closed for user root
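
chronyd is then enabled, started, and restarted to pick up the new configuration; the minimal setup calls with filter=['ansible_service_mgr'] just before each systemd invocation are how the generic service wrapper selects its backend, suggesting the playbook used the service module rather than calling systemd directly. Roughly (a sketch; task names and the service/systemd choice are inferences):

    - name: Enable and start chronyd
      ansible.builtin.service:
        name: chronyd
        enabled: true
        state: started

    - name: Restart chronyd to apply the new configuration
      ansible.builtin.service:
        name: chronyd
        state: restarted
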
Nov 24 09:22:12 compute-0 sshd-session[53491]: Connection closed by 192.168.122.30 port 57052
Nov 24 09:22:12 compute-0 sshd-session[53488]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:22:12 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 24 09:22:12 compute-0 systemd[1]: session-12.scope: Consumed 25.296s CPU time.
Nov 24 09:22:12 compute-0 systemd-logind[822]: Session 12 logged out. Waiting for processes to exit.
Nov 24 09:22:12 compute-0 systemd-logind[822]: Removed session 12.
Nov 24 09:22:17 compute-0 sshd-session[58469]: Accepted publickey for zuul from 192.168.122.30 port 47664 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:22:17 compute-0 systemd-logind[822]: New session 13 of user zuul.
Nov 24 09:22:17 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 24 09:22:17 compute-0 sshd-session[58469]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:22:18 compute-0 sudo[58622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osyxasclokiekgfzngdvyoaktwjsdbfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976137.9133995-26-277972226804926/AnsiballZ_file.py'
Nov 24 09:22:18 compute-0 sudo[58622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:18 compute-0 python3.9[58624]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:18 compute-0 sudo[58622]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:19 compute-0 sudo[58774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enkniwboewedhyryuohzlzmhxroimdnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976138.782866-62-155883661090521/AnsiballZ_stat.py'
Nov 24 09:22:19 compute-0 sudo[58774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:19 compute-0 python3.9[58776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:19 compute-0 sudo[58774]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:19 compute-0 sudo[58897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhhqwnemobirwgugyqepznbzerykqcnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976138.782866-62-155883661090521/AnsiballZ_copy.py'
Nov 24 09:22:19 compute-0 sudo[58897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:20 compute-0 python3.9[58899]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976138.782866-62-155883661090521/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:20 compute-0 sudo[58897]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:20 compute-0 sshd-session[58472]: Connection closed by 192.168.122.30 port 47664
Nov 24 09:22:20 compute-0 sshd-session[58469]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:22:20 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 24 09:22:20 compute-0 systemd[1]: session-13.scope: Consumed 1.551s CPU time.
Nov 24 09:22:20 compute-0 systemd-logind[822]: Session 13 logged out. Waiting for processes to exit.
Nov 24 09:22:20 compute-0 systemd-logind[822]: Removed session 13.
Nov 24 09:22:25 compute-0 sshd-session[58924]: Accepted publickey for zuul from 192.168.122.30 port 47674 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:22:25 compute-0 systemd-logind[822]: New session 14 of user zuul.
Nov 24 09:22:25 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 24 09:22:25 compute-0 sshd-session[58924]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:22:27 compute-0 python3.9[59077]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:22:27 compute-0 sudo[59231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbikvotrrbnwjhebrefqxopcfgfeswba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976147.518082-59-74829881105456/AnsiballZ_file.py'
Nov 24 09:22:27 compute-0 sudo[59231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:28 compute-0 python3.9[59233]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:28 compute-0 sudo[59231]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:28 compute-0 sudo[59406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzklzjorezuqwjobfeitxkezrclbnfhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976148.4049044-83-58853762764501/AnsiballZ_stat.py'
Nov 24 09:22:28 compute-0 sudo[59406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:29 compute-0 python3.9[59408]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:29 compute-0 sudo[59406]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:29 compute-0 sshd-session[59456]: Connection closed by 205.210.31.225 port 51190
Nov 24 09:22:29 compute-0 sudo[59530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cagvqhdjzkrhdthbbhwonnbszewxodlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976148.4049044-83-58853762764501/AnsiballZ_copy.py'
Nov 24 09:22:29 compute-0 sudo[59530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:29 compute-0 python3.9[59532]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763976148.4049044-83-58853762764501/.source.json _original_basename=.4zxo_7o3 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:29 compute-0 sudo[59530]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:30 compute-0 sudo[59682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqgcrqfefdoqyyzkqrjfnkpsqexyqtgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976150.1938639-152-122016087095400/AnsiballZ_stat.py'
Nov 24 09:22:30 compute-0 sudo[59682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:30 compute-0 python3.9[59684]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:30 compute-0 sudo[59682]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:30 compute-0 sudo[59805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzylekeecxiarpwiywludhoknhhkfpai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976150.1938639-152-122016087095400/AnsiballZ_copy.py'
Nov 24 09:22:30 compute-0 sudo[59805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:31 compute-0 python3.9[59807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976150.1938639-152-122016087095400/.source _original_basename=.4h9sgyd5 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:31 compute-0 sudo[59805]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:31 compute-0 sudo[59957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjkhbagrzghjndzoqurmfftizpzizrsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976151.5144494-200-29971791655914/AnsiballZ_file.py'
Nov 24 09:22:31 compute-0 sudo[59957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:31 compute-0 python3.9[59959]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:22:31 compute-0 sudo[59957]: pam_unix(sudo:session): session closed for user root
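
/var/local/libexec is created and recursively relabeled with setype=container_file_t, the SELinux type that allows container processes to read the EDPM helper scripts installed just after (edpm-container-shutdown, edpm-start-podman-container). As a task (sketch; name assumed):

    - name: Create /var/local/libexec with a container-accessible SELinux context
      ansible.builtin.file:
        path: /var/local/libexec
        state: directory
        setype: container_file_t
        recurse: true
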
Nov 24 09:22:32 compute-0 sudo[60109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmntbhicrmprscypsyxsrgrxhstxukaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976152.2675436-224-50958796290855/AnsiballZ_stat.py'
Nov 24 09:22:32 compute-0 sudo[60109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:32 compute-0 python3.9[60111]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:32 compute-0 sudo[60109]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:33 compute-0 sudo[60232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdiufqpatkcatodryfdvqepitkjhqoyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976152.2675436-224-50958796290855/AnsiballZ_copy.py'
Nov 24 09:22:33 compute-0 sudo[60232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:33 compute-0 python3.9[60234]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763976152.2675436-224-50958796290855/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:22:33 compute-0 sudo[60232]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:33 compute-0 sudo[60384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbceubvmhaptizrcwihqfejijiypqvue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976153.3969033-224-174727904147641/AnsiballZ_stat.py'
Nov 24 09:22:33 compute-0 sudo[60384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:33 compute-0 python3.9[60386]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:33 compute-0 sudo[60384]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:34 compute-0 sudo[60507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvbkmmtzqejzilnflayrfvtejrcfftud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976153.3969033-224-174727904147641/AnsiballZ_copy.py'
Nov 24 09:22:34 compute-0 sudo[60507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:34 compute-0 python3.9[60509]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763976153.3969033-224-174727904147641/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:22:34 compute-0 sudo[60507]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:35 compute-0 sudo[60659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvhndhwuxonyzrjmreftaahzfretjsbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976154.8626547-311-32117289088524/AnsiballZ_file.py'
Nov 24 09:22:35 compute-0 sudo[60659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:35 compute-0 python3.9[60661]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:35 compute-0 sudo[60659]: pam_unix(sudo:session): session closed for user root
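
mode=420 in the invocation above is not a typo: an unquoted `mode: 0644` in playbook YAML is parsed as an octal integer, which the module then logs in decimal (0644 octal = 420 decimal). Quoting the mode avoids the ambiguity (a sketch; the task name is an assumption):

    - name: Create the systemd preset directory
      ansible.builtin.file:
        path: /etc/systemd/system-preset
        state: directory
        mode: "0644"    # quoted, so it stays octal instead of becoming the integer 420
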
Nov 24 09:22:35 compute-0 sudo[60811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgogjrxzqiucpfynayjzfeygwjppwpvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976155.6485133-335-74297328770272/AnsiballZ_stat.py'
Nov 24 09:22:35 compute-0 sudo[60811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:36 compute-0 python3.9[60813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:36 compute-0 sudo[60811]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:36 compute-0 sudo[60934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxbbyfhdfgzyjudfymmwnqehlsgrqltk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976155.6485133-335-74297328770272/AnsiballZ_copy.py'
Nov 24 09:22:36 compute-0 sudo[60934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:36 compute-0 python3.9[60936]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976155.6485133-335-74297328770272/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:36 compute-0 sudo[60934]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:37 compute-0 sudo[61086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhkvdbtllyxpwguuwqrkycltukjokwhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976157.0345771-380-167299803342519/AnsiballZ_stat.py'
Nov 24 09:22:37 compute-0 sudo[61086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:37 compute-0 python3.9[61088]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:37 compute-0 sudo[61086]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:37 compute-0 sudo[61209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbmtnasdmscikapdclhytdkngymqhdym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976157.0345771-380-167299803342519/AnsiballZ_copy.py'
Nov 24 09:22:37 compute-0 sudo[61209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:38 compute-0 python3.9[61211]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976157.0345771-380-167299803342519/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:38 compute-0 sudo[61209]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:38 compute-0 sudo[61361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiwpoirbmkgodbchhufhdrncnhlydkjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976158.3833537-425-243175866327810/AnsiballZ_systemd.py'
Nov 24 09:22:38 compute-0 sudo[61361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:39 compute-0 python3.9[61363]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:22:39 compute-0 systemd[1]: Reloading.
Nov 24 09:22:39 compute-0 systemd-sysv-generator[61391]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:22:39 compute-0 systemd-rc-local-generator[61386]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:22:39 compute-0 systemd[1]: Reloading.
Nov 24 09:22:39 compute-0 systemd-rc-local-generator[61424]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:22:39 compute-0 systemd-sysv-generator[61432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:22:39 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 24 09:22:39 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 24 09:22:39 compute-0 sudo[61361]: pam_unix(sudo:session): session closed for user root
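
After installing the unit file and its 91-edpm-container-shutdown.preset, the service is brought up with a daemon reload so systemd re-reads unit files before starting the freshly installed unit; that is what the "Reloading." passes above reflect. As a task (sketch, from the logged parameters):

    - name: Enable and start the EDPM container shutdown service
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        enabled: true
        state: started
        daemon_reload: true
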
Nov 24 09:22:40 compute-0 sudo[61589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmcwgmjfgkiliggbkmejwokmmwmvwsrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976160.0405953-449-231987366208972/AnsiballZ_stat.py'
Nov 24 09:22:40 compute-0 sudo[61589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:40 compute-0 python3.9[61591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:40 compute-0 sudo[61589]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:40 compute-0 sudo[61712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wonnjrcqownmnintcukssqfiwqmgrdid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976160.0405953-449-231987366208972/AnsiballZ_copy.py'
Nov 24 09:22:40 compute-0 sudo[61712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:41 compute-0 python3.9[61714]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976160.0405953-449-231987366208972/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:41 compute-0 sudo[61712]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:41 compute-0 sudo[61864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnbfmdbfbontjdjvqkjtmsmlbobumikw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976161.5882409-494-115610247337789/AnsiballZ_stat.py'
Nov 24 09:22:41 compute-0 sudo[61864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:42 compute-0 python3.9[61866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:42 compute-0 sudo[61864]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:42 compute-0 sudo[61987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtzzduqcdfpeojcvyucqkvrucjecdscn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976161.5882409-494-115610247337789/AnsiballZ_copy.py'
Nov 24 09:22:42 compute-0 sudo[61987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:42 compute-0 python3.9[61989]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976161.5882409-494-115610247337789/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:42 compute-0 sudo[61987]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:43 compute-0 sudo[62139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ximfrhvufkkjzsfsniwjmrrtjldnaoyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976162.9709735-539-106604293289999/AnsiballZ_systemd.py'
Nov 24 09:22:43 compute-0 sudo[62139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:43 compute-0 python3.9[62141]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:22:43 compute-0 systemd[1]: Reloading.
Nov 24 09:22:43 compute-0 systemd-sysv-generator[62170]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:22:43 compute-0 systemd-rc-local-generator[62165]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:22:43 compute-0 systemd[1]: Reloading.
Nov 24 09:22:43 compute-0 systemd-rc-local-generator[62207]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:22:43 compute-0 systemd-sysv-generator[62211]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:22:44 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 09:22:44 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 09:22:44 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 09:22:44 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 09:22:44 compute-0 sudo[62139]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:45 compute-0 python3.9[62367]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:22:45 compute-0 network[62384]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:22:45 compute-0 network[62385]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:22:45 compute-0 network[62386]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:22:50 compute-0 sudo[62646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyjybdjwkymgtjjgtbrxsbhncjjozkex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976169.714191-587-109669204015983/AnsiballZ_systemd.py'
Nov 24 09:22:50 compute-0 sudo[62646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:50 compute-0 python3.9[62648]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:22:50 compute-0 systemd[1]: Reloading.
Nov 24 09:22:50 compute-0 systemd-rc-local-generator[62676]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:22:50 compute-0 systemd-sysv-generator[62679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:22:50 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 24 09:22:50 compute-0 iptables.init[62688]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 24 09:22:51 compute-0 iptables.init[62688]: iptables: Flushing firewall rules: [  OK  ]
Nov 24 09:22:51 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 24 09:22:51 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 24 09:22:51 compute-0 sudo[62646]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:51 compute-0 sudo[62882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrjpdqlqkvczlsmfwgfyfebghuuajygl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976171.2298791-587-45010447646180/AnsiballZ_systemd.py'
Nov 24 09:22:51 compute-0 sudo[62882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:51 compute-0 python3.9[62884]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:22:51 compute-0 sudo[62882]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:52 compute-0 sudo[63036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vziszuiekllfcwuuujwjfiaxgjamahtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976172.33119-635-190861224874932/AnsiballZ_systemd.py'
Nov 24 09:22:52 compute-0 sudo[63036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:52 compute-0 python3.9[63038]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:22:52 compute-0 systemd[1]: Reloading.
Nov 24 09:22:53 compute-0 systemd-rc-local-generator[63068]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:22:53 compute-0 systemd-sysv-generator[63072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:22:54 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 24 09:22:54 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 24 09:22:54 compute-0 sudo[63036]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:54 compute-0 sudo[63228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybbxkivrxbrxpiwcxlfbhnmxehfchqmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976174.3809159-659-276752886596016/AnsiballZ_command.py'
Nov 24 09:22:54 compute-0 sudo[63228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:55 compute-0 python3.9[63230]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:22:55 compute-0 sudo[63228]: pam_unix(sudo:session): session closed for user root
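
Here the host switches firewall backends: the legacy iptables/ip6tables services are stopped and disabled, nftables is enabled, and the ruleset is flushed so EDPM can load its own chains. Condensed into playbook form (a sketch; the loop is an editorial condensation of the two separately logged tasks):

    - name: Stop and disable the legacy iptables services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - iptables.service
        - ip6tables.service

    - name: Enable and start nftables
      ansible.builtin.systemd:
        name: nftables
        state: started
        enabled: true

    - name: Start from an empty ruleset
      ansible.builtin.command: nft flush ruleset
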
Nov 24 09:22:56 compute-0 sudo[63381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glntjmklimkpailyzmejnxxxgttzomzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976175.7733474-701-225101579510089/AnsiballZ_stat.py'
Nov 24 09:22:56 compute-0 sudo[63381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:56 compute-0 python3.9[63383]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:22:56 compute-0 sudo[63381]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:56 compute-0 sudo[63506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmlnjylqviocuhtihfbmipldiesadvcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976175.7733474-701-225101579510089/AnsiballZ_copy.py'
Nov 24 09:22:56 compute-0 sudo[63506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:56 compute-0 python3.9[63508]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976175.7733474-701-225101579510089/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:56 compute-0 sudo[63506]: pam_unix(sudo:session): session closed for user root
Nov 24 09:22:57 compute-0 sudo[63659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yozgjcllkoibazpbfargaywqymlzgxse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976177.1787653-746-102130909893854/AnsiballZ_systemd.py'
Nov 24 09:22:57 compute-0 sudo[63659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:57 compute-0 python3.9[63661]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:22:57 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 24 09:22:57 compute-0 sshd[1005]: Received SIGHUP; restarting.
Nov 24 09:22:57 compute-0 sshd[1005]: Server listening on 0.0.0.0 port 22.
Nov 24 09:22:57 compute-0 sshd[1005]: Server listening on :: port 22.
Nov 24 09:22:57 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 24 09:22:57 compute-0 sudo[63659]: pam_unix(sudo:session): session closed for user root
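
The sshd_config deployment uses the validate parameter: `/usr/sbin/sshd -T -f %s` is run against the temporary upload before it replaces the destination, so a config that fails to parse never reaches the live file; the daemon is then reloaded rather than restarted, which keeps existing sessions up (SIGHUP, as the sshd[1005] lines show). Sketch (task names assumed; _original_basename=sshd_config_block.j2 suggests a template task):

    - name: Deploy sshd_config, validating before install
      ansible.builtin.template:
        src: sshd_config_block.j2
        dest: /etc/ssh/sshd_config
        mode: "0600"
        validate: /usr/sbin/sshd -T -f %s

    - name: Reload sshd
      ansible.builtin.service:
        name: sshd
        state: reloaded
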
Nov 24 09:22:59 compute-0 sudo[63815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neldlmaqxmoyycrnjxjxtquqmfwxuhjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976178.919446-770-103541531401487/AnsiballZ_file.py'
Nov 24 09:22:59 compute-0 sudo[63815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:22:59 compute-0 python3.9[63817]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:22:59 compute-0 sudo[63815]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:00 compute-0 sudo[63967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypkvdoaxxkullwnxkfdinpackjdzdeec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976179.7091594-794-234305958280432/AnsiballZ_stat.py'
Nov 24 09:23:00 compute-0 sudo[63967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:00 compute-0 python3.9[63969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:00 compute-0 sudo[63967]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:00 compute-0 sudo[64090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpujfbulskjceagnbzrobnbfcylwmkhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976179.7091594-794-234305958280432/AnsiballZ_copy.py'
Nov 24 09:23:00 compute-0 sudo[64090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:00 compute-0 python3.9[64092]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976179.7091594-794-234305958280432/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:00 compute-0 sudo[64090]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:01 compute-0 sudo[64242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bykgdeiiznebkvecvtippzmfuetmcgnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976181.3100984-848-99034582227822/AnsiballZ_timezone.py'
Nov 24 09:23:01 compute-0 sudo[64242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:01 compute-0 python3.9[64244]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 09:23:01 compute-0 systemd[1]: Starting Time & Date Service...
Nov 24 09:23:02 compute-0 systemd[1]: Started Time & Date Service.
Nov 24 09:23:02 compute-0 sudo[64242]: pam_unix(sudo:session): session closed for user root
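
Setting the timezone goes through community.general.timezone, which on systemd hosts typically shells out to timedatectl; the on-demand D-Bus activation of systemd-timedated is what the "Starting Time & Date Service..." lines above show. Equivalent task (sketch):

    - name: Set the system timezone to UTC
      community.general.timezone:
        name: UTC
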
Nov 24 09:23:02 compute-0 sudo[64398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odmjrpaygrtdhogdjtfciylajgafdbfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976182.3918705-875-259212796786599/AnsiballZ_file.py'
Nov 24 09:23:02 compute-0 sudo[64398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:02 compute-0 python3.9[64400]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:02 compute-0 sudo[64398]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:03 compute-0 sudo[64550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hadtxpsikfsoyyzmcpkjmnxrqummwdzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976183.0529742-899-212896813529092/AnsiballZ_stat.py'
Nov 24 09:23:03 compute-0 sudo[64550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:03 compute-0 python3.9[64552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:03 compute-0 sudo[64550]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:03 compute-0 sudo[64673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgmnchgqidzujqoifehkvbeqvtinwdng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976183.0529742-899-212896813529092/AnsiballZ_copy.py'
Nov 24 09:23:03 compute-0 sudo[64673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:03 compute-0 python3.9[64675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976183.0529742-899-212896813529092/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:04 compute-0 sudo[64673]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:04 compute-0 sudo[64825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgydnhpawvkmjyhcrldbsqmmojckjkuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976184.3835588-944-123001467254108/AnsiballZ_stat.py'
Nov 24 09:23:04 compute-0 sudo[64825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:04 compute-0 python3.9[64827]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:04 compute-0 sudo[64825]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:05 compute-0 sudo[64948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sitimbrdrhnkfartsqdgxscsijwazyll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976184.3835588-944-123001467254108/AnsiballZ_copy.py'
Nov 24 09:23:05 compute-0 sudo[64948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:05 compute-0 python3.9[64950]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976184.3835588-944-123001467254108/.source.yaml _original_basename=.00knmpvm follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:05 compute-0 sudo[64948]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:05 compute-0 sudo[65100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqcmqsbneafsnlwifzfeozbatjdngehy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976185.6594915-989-229508144411349/AnsiballZ_stat.py'
Nov 24 09:23:05 compute-0 sudo[65100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:06 compute-0 python3.9[65102]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:06 compute-0 sudo[65100]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:06 compute-0 sudo[65223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frntdbkzaoolpdxgnelhixpzzydsggxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976185.6594915-989-229508144411349/AnsiballZ_copy.py'
Nov 24 09:23:06 compute-0 sudo[65223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:06 compute-0 python3.9[65225]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976185.6594915-989-229508144411349/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:06 compute-0 sudo[65223]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:07 compute-0 sudo[65375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyzeabbrcwlknlgvsfjvnwtdiysnpvlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976186.955229-1034-123159302184751/AnsiballZ_command.py'
Nov 24 09:23:07 compute-0 sudo[65375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:07 compute-0 python3.9[65377]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:07 compute-0 sudo[65375]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:07 compute-0 sudo[65528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eixjwxutcnivsgkaocgcfikeyieuotws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976187.6663535-1058-18310630603093/AnsiballZ_command.py'
Nov 24 09:23:07 compute-0 sudo[65528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:08 compute-0 python3.9[65530]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:08 compute-0 sudo[65528]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:08 compute-0 sudo[65681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsbvrcuufmecrtzjaimlavoewtcyrllf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763976188.3892775-1082-234047912909663/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 09:23:08 compute-0 sudo[65681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:09 compute-0 python3[65683]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 09:23:09 compute-0 sudo[65681]: pam_unix(sudo:session): session closed for user root
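
edpm_nftables_from_files is a custom module from the edpm-ansible tooling; judging from this run, it reads the YAML fragments staged under /var/lib/edpm-config/firewall (ceph-networks.yaml, sshd-networks.yaml, and the base and user rule files written earlier) and feeds the merged result into the edpm-*.nft snippets generated next. Invocation as logged (the collection namespace is omitted because it is not visible here):

    - name: Aggregate staged firewall rule fragments   # name assumed
      edpm_nftables_from_files:
        src: /var/lib/edpm-config/firewall
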
Nov 24 09:23:09 compute-0 sudo[65833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzvypcsfctbknasvgiysihqocvsisqth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976189.233806-1106-92685054364243/AnsiballZ_stat.py'
Nov 24 09:23:09 compute-0 sudo[65833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:09 compute-0 python3.9[65835]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:09 compute-0 sudo[65833]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:10 compute-0 sudo[65956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kodlovuflhsqjwrlqbkomhiafkkllhnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976189.233806-1106-92685054364243/AnsiballZ_copy.py'
Nov 24 09:23:10 compute-0 sudo[65956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:10 compute-0 python3.9[65958]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976189.233806-1106-92685054364243/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:10 compute-0 sudo[65956]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:10 compute-0 sudo[66108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muiuxalfdkkwncrqpovabkznjxbxxvep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976190.583897-1151-271939748178995/AnsiballZ_stat.py'
Nov 24 09:23:10 compute-0 sudo[66108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:11 compute-0 python3.9[66110]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:11 compute-0 sudo[66108]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:11 compute-0 sudo[66231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpbebbhxaqdikvrzzztmhyfxftvbtqza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976190.583897-1151-271939748178995/AnsiballZ_copy.py'
Nov 24 09:23:11 compute-0 sudo[66231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:11 compute-0 python3.9[66233]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976190.583897-1151-271939748178995/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:11 compute-0 sudo[66231]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:12 compute-0 sudo[66383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvgxmkxuwqwpfbwbutlufhiusjgemkei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976192.0021338-1196-140557650203553/AnsiballZ_stat.py'
Nov 24 09:23:12 compute-0 sudo[66383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:12 compute-0 python3.9[66385]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:12 compute-0 sudo[66383]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:12 compute-0 sudo[66506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjpjyxthzfzafuszjyqdhguqtrzobugm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976192.0021338-1196-140557650203553/AnsiballZ_copy.py'
Nov 24 09:23:12 compute-0 sudo[66506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:12 compute-0 python3.9[66508]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976192.0021338-1196-140557650203553/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:12 compute-0 sudo[66506]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:13 compute-0 sudo[66658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeoxqloiucmeoiaamjuvgwskgqclockb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976193.4413593-1241-189602080776261/AnsiballZ_stat.py'
Nov 24 09:23:13 compute-0 sudo[66658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:13 compute-0 python3.9[66660]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:13 compute-0 sudo[66658]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:14 compute-0 sudo[66781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcfyufrzboutwuvakgznfnhupivrbxyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976193.4413593-1241-189602080776261/AnsiballZ_copy.py'
Nov 24 09:23:14 compute-0 sudo[66781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:14 compute-0 python3.9[66783]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976193.4413593-1241-189602080776261/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:14 compute-0 sudo[66781]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:15 compute-0 sudo[66933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmrfgtoqcjckgmnkzpkxsrgdgqakpubq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976194.8939-1286-190889176803742/AnsiballZ_stat.py'
Nov 24 09:23:15 compute-0 sudo[66933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:15 compute-0 python3.9[66935]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:23:15 compute-0 sudo[66933]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:15 compute-0 sudo[67056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hugcguwmoxrzjogozshshlindynzqbms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976194.8939-1286-190889176803742/AnsiballZ_copy.py'
Nov 24 09:23:15 compute-0 sudo[67056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:16 compute-0 python3.9[67058]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976194.8939-1286-190889176803742/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:16 compute-0 sudo[67056]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:16 compute-0 sudo[67208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjlreicczgaufzdjedekcznrlcdvjqrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976196.3513997-1331-146072900853286/AnsiballZ_file.py'
Nov 24 09:23:16 compute-0 sudo[67208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:16 compute-0 python3.9[67210]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:16 compute-0 sudo[67208]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:17 compute-0 sudo[67360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvitazpdhtuevvjdndkkrdesvuemwpfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976197.0771427-1355-246462342064177/AnsiballZ_command.py'
Nov 24 09:23:17 compute-0 sudo[67360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:17 compute-0 python3.9[67362]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:17 compute-0 sudo[67360]: pam_unix(sudo:session): session closed for user root
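
Before anything is loaded into the kernel, the five rule files are concatenated and passed through nft -c -f -, which parses and checks the ruleset without committing it; set -o pipefail makes the task fail if the cat itself fails. Note that edpm-jumps.nft takes part in the check even though it was not rewritten in this run. A sketch of the validation task (only the task name is invented; the command is logged verbatim above):

    - name: Syntax-check the assembled EDPM ruleset   # hypothetical task name
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-chains.nft \
            /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft \
            /etc/nftables/edpm-jumps.nft | nft -c -f -
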
Nov 24 09:23:18 compute-0 sudo[67519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxxhypzwtsqsgwyohwuukmskuutoibqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976197.9303186-1379-129021442971921/AnsiballZ_blockinfile.py'
Nov 24 09:23:18 compute-0 sudo[67519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:18 compute-0 python3.9[67521]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:18 compute-0 sudo[67519]: pam_unix(sudo:session): session closed for user root
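
The blockinfile call above pins the persistent includes into /etc/sysconfig/nftables.conf, validating the candidate file with nft -c -f %s before moving it into place. Only the boot-persistent pieces (chains, rules, jumps) are included; edpm-flushes.nft and edpm-update-jumps.nft are one-shot helpers used at apply time. Given the logged markers and block, the managed region of the file should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
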
Nov 24 09:23:19 compute-0 sudo[67672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtctjvyepxfzscolmzhncarfyawtdwwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976198.9269161-1406-139172950101404/AnsiballZ_file.py'
Nov 24 09:23:19 compute-0 sudo[67672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:19 compute-0 python3.9[67674]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:19 compute-0 sudo[67672]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:19 compute-0 sudo[67824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrhaakitmoihflvemxiibamuphiarezc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976199.5876024-1406-60788473269019/AnsiballZ_file.py'
Nov 24 09:23:19 compute-0 sudo[67824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:20 compute-0 python3.9[67826]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:20 compute-0 sudo[67824]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:20 compute-0 sudo[67976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvqokxyyouuxwqsoccxzkwhjelwdogui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976200.4032183-1451-134265240324746/AnsiballZ_mount.py'
Nov 24 09:23:20 compute-0 sudo[67976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:21 compute-0 python3.9[67978]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 09:23:21 compute-0 sudo[67976]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:21 compute-0 sudo[68129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhjvatuenenyybfkgrgyrgpheruqxiqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976201.3106287-1451-5018186000956/AnsiballZ_mount.py'
Nov 24 09:23:21 compute-0 sudo[68129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:21 compute-0 python3.9[68131]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 09:23:21 compute-0 sudo[68129]: pam_unix(sudo:session): session closed for user root
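
The two file/mount pairs above create hugepage mount points owned by zuul:hugetlbfs and mount hugetlbfs with an explicit pagesize on each. With state=mounted and boot=True, ansible.posix.mount both mounts the filesystem immediately and persists an /etc/fstab entry along the lines of "none /dev/hugepages1G hugetlbfs pagesize=1G 0 0". A sketch of one of the two tasks; the name is an assumption, the parameters are logged:

    - name: Mount the 1G hugepage filesystem   # hypothetical task name
      ansible.posix.mount:
        path: /dev/hugepages1G
        src: none
        fstype: hugetlbfs
        opts: pagesize=1G
        state: mounted   # mount now and persist in /etc/fstab
        boot: true
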
Nov 24 09:23:22 compute-0 sshd-session[58927]: Connection closed by 192.168.122.30 port 47674
Nov 24 09:23:22 compute-0 sshd-session[58924]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:23:22 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 24 09:23:22 compute-0 systemd[1]: session-14.scope: Consumed 36.127s CPU time.
Nov 24 09:23:22 compute-0 systemd-logind[822]: Session 14 logged out. Waiting for processes to exit.
Nov 24 09:23:22 compute-0 systemd-logind[822]: Removed session 14.
Nov 24 09:23:28 compute-0 sshd-session[68157]: Accepted publickey for zuul from 192.168.122.30 port 54988 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:23:28 compute-0 systemd-logind[822]: New session 15 of user zuul.
Nov 24 09:23:28 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 24 09:23:28 compute-0 sshd-session[68157]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:23:28 compute-0 sudo[68310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czsajoeincfwzhegbdenuthkdotckpza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976208.3357086-18-76442868158286/AnsiballZ_tempfile.py'
Nov 24 09:23:28 compute-0 sudo[68310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:29 compute-0 python3.9[68312]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 09:23:29 compute-0 sudo[68310]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:29 compute-0 sudo[68462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlicpyyzkeoxbcbbhabqhkerhxtrwpcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976209.286185-54-107118164474478/AnsiballZ_stat.py'
Nov 24 09:23:29 compute-0 sudo[68462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:29 compute-0 python3.9[68464]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:23:29 compute-0 sudo[68462]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:30 compute-0 sudo[68614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lygkiugtlrmysqjhoumoflkqjvvimcsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976210.2501893-84-144237206248701/AnsiballZ_setup.py'
Nov 24 09:23:30 compute-0 sudo[68614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:31 compute-0 python3.9[68616]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:23:31 compute-0 sudo[68614]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:32 compute-0 sudo[68766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crqyngvkcvhfpcwfkulwflnvotcegmks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976211.5066724-109-277060729828881/AnsiballZ_blockinfile.py'
Nov 24 09:23:32 compute-0 sudo[68766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:32 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 09:23:32 compute-0 python3.9[68768]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnPh2FYKCqB5Rxe2d73LAea+vmvipLFksP43GM8QFNtdkL9UXsBFKIlbvhCArQ0+q5/EXcOy13rEWVabeuzYdek35bvnCWnqrlaoEFqEV7Y7SDrutMHxHvnLthse/1jj4AvtjvQXG0bKruDgtz2CBksRaKWTEHPZHLOYOwWLGogWVazacOPagjlMQ9UdpYvwfqgKnjMpl6sHCvQC7C0kTNvrYrrhUZqReUWyggx/XcC/YJvSYvMW1wNRhYmypPzEXu8QXt0ywHvCucILZcZqBE1/lKAUCLqDEkB/xpMnKiZ/EmDtyv8AP7H231WeEoaU4BziaD2jSd/H6lr2JJwpKBlrGkti8gQpJHtDytAtbVtrLD5fW+1GkobqN/2GXjNnvzuLB36OhT4nysfJ6BPP3sgaaZ2RJSzP5hI3jfFVn/NYjbaRIoo+tOB50PJeIPj6c5uMX+Qcb2V6EOUwogIRhtwN7A1XHh8dQPCUVYCUmNIq1K7NZ3Hxf+BqhVsSj6SK0=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINu5/fR7YXhb91kwrOd7U+mnimdcm+o61ru6zTYmFIZO
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJFgzeIWa1Ve+dIxs7Pjz8TnBGpgkm/KAIeb7PoVU+QfPqP68TrTBJjwgq/5DOilENFVsFmr+3WdERS0uMWfxXo=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyBn9mTS8EhHsIKYO0tLgGtKOo5KK33vyjqFzXOs43ZcW8GNKmSQ7DXnq80OCGGkDE9aL5uVEQ82MaYpYE8rZVZGrTF1heqhLe2ModNgcaUA+dBOzScRYEm5JAsj6ajcAc7fiPseazHiC80XQlEo+bwF6XHf/i9t7MHMqQCKdM+qnsEd6JeYe+Zy6X7Web4mN4mbvDaHxjBAdxuR0g0bKoYRjFeeNQyQQ/2Fpsa/i/ZqFVU59TrQ1vm9wLk9wJQd7mBQsdxizekzHGMkE5Ub8VdN43iscVyKKhZWeUOyEK2HASt+n/fHjIsFD65a4GLiHFuJ8DJ4CrWFrwt1RIXLkNFOImjH5kiMO55d/Qogf5F33Mkto3ntPQP/tShtBEDIzc9JCE7vYLFjk/bMSUcK9/u41E8suBkZBHnzXC8+eB6XCoYYNxA+cowaSg5+YCSxL6yON9u34LV+i3jZosNYNivLHjOmOsyGEs/Az6NLkHYzxYCHY042etu9Py2/lONrk=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDX1cMQF3siye3qNUS07EBS+iX+poG1/aIqFR51WsltV
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFy78zaPxoZwc0f5pE0EdJcb6EwSlQGeMhelmYFBlrBeD2fH3vCrxrTbbmmM9DSQFtIo8sNV7/s7CV9dvbvMOzQ=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYj9G0Ft/Psyl/13EAEebfB7qR7surocLwWTVKKcclTBPrKIFnHkxuGFUee1a6DQGup+ENEdhJN2MOXFv/jskxJUsoILDHuvx17jHKFvMSR7ycfe+1umEqgfKCHGxlLXobZjj7t2PzAveNkTk+zeX8pqLH1q86LI01fH0n3jdSksqEXvxbiDLMspPTM3alGxNI4pztPvN3i+0qfCPD5SL9dhFsP4C8IVTBWAM4g7Qd6LyKhx+MVoEVecLL6jsM8z+zArVsZKFcZOKFpl0MTeWdpNR0b4u0ILO59y38D/dVoM45NRDpIi7HyoS7TsD0XpP+3zP8hGo4M35QU+a9YRmdCaUChLmqjfUprjnQrusAuQfP406rQ3JlgWs3YAwF0IPhvHv57pPWm3xGwKPFpO0Jguw5cQdZZvYk4tS9JvlCz5+Yyfm3+9T+k1KLfcZ+zlvOYKz+BXNiPfk1bF9ML7/KEIyJjGf32o5nEp0H1sH24wrSIroXa+woila4KBTffe8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFQe/vdPzZywzEntIohbfJ9grfNBp30Atbg8qy8BeQ3c
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPhaUxRkg9RrudtznCKCcwWhf1hoSfCyCfTHlGI62beVEpMD4en9bzfcuYnvB/Qm3vgzgUVMpS53KCL9bmqBfT8=
                                             create=True mode=0644 path=/tmp/ansible.tw4m5u2b state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:32 compute-0 sudo[68766]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:32 compute-0 sudo[68920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blapcccenrjygfofkegmumyhqsrfyisl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976212.4884186-133-85026376191465/AnsiballZ_command.py'
Nov 24 09:23:32 compute-0 sudo[68920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:33 compute-0 python3.9[68922]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.tw4m5u2b' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:33 compute-0 sudo[68920]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:33 compute-0 sudo[69074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udzfrlufiqkorpbwxjsbgbsalivzremt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976213.3753922-157-258939351485086/AnsiballZ_file.py'
Nov 24 09:23:33 compute-0 sudo[69074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:34 compute-0 python3.9[69076]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.tw4m5u2b state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:34 compute-0 sudo[69074]: pam_unix(sudo:session): session closed for user root
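
Session 15 assembles the cluster-wide SSH known_hosts: a root-owned temp file is created, host keys gathered through the ssh_host_key_*_public fact subsets are written into it with blockinfile, the temp file is poured over /etc/ssh/ssh_known_hosts with a shell redirect, and the temp file is removed. Overwriting through cat rather than copying keeps the existing inode, ownership, and SELinux context of the target. A sketch of the install step, assuming the tempfile result was registered as known_hosts_tmp (a hypothetical variable name):

    - name: Install the assembled known_hosts file   # hypothetical task name
      ansible.builtin.shell: "cat '{{ known_hosts_tmp.path }}' > /etc/ssh/ssh_known_hosts"
      # the redirect overwrites in place, preserving the file's inode and SELinux context
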
Nov 24 09:23:34 compute-0 sshd-session[68160]: Connection closed by 192.168.122.30 port 54988
Nov 24 09:23:34 compute-0 sshd-session[68157]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:23:34 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 24 09:23:34 compute-0 systemd[1]: session-15.scope: Consumed 3.805s CPU time.
Nov 24 09:23:34 compute-0 systemd-logind[822]: Session 15 logged out. Waiting for processes to exit.
Nov 24 09:23:34 compute-0 systemd-logind[822]: Removed session 15.
Nov 24 09:23:40 compute-0 sshd-session[69101]: Accepted publickey for zuul from 192.168.122.30 port 54516 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:23:40 compute-0 systemd-logind[822]: New session 16 of user zuul.
Nov 24 09:23:40 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 24 09:23:40 compute-0 sshd-session[69101]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:23:41 compute-0 python3.9[69254]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:23:42 compute-0 sudo[69408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptqdyvzldyyweqhklalwknrhtkcituyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976222.1576664-56-204397080246117/AnsiballZ_systemd.py'
Nov 24 09:23:42 compute-0 sudo[69408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:43 compute-0 python3.9[69410]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 09:23:43 compute-0 sudo[69408]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:43 compute-0 sudo[69562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpqnwckndmsfgnwwcrotcbquntzscmqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976223.4348106-80-42075574041808/AnsiballZ_systemd.py'
Nov 24 09:23:43 compute-0 sudo[69562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:44 compute-0 python3.9[69564]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:23:44 compute-0 sudo[69562]: pam_unix(sudo:session): session closed for user root
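
Session 16 pins sshd on before touching the firewall, split into an enable task and a start task so each is idempotent on its own. A sketch with the logged parameters (task names are assumptions):

    - name: Enable sshd at boot   # hypothetical task name
      ansible.builtin.systemd:
        name: sshd
        enabled: true

    - name: Ensure sshd is running   # hypothetical task name
      ansible.builtin.systemd:
        name: sshd
        state: started
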
Nov 24 09:23:44 compute-0 sudo[69715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrjkmfxjgjihtkvwmqrcbvgazdsgtnze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976224.409328-107-68220483511792/AnsiballZ_command.py'
Nov 24 09:23:44 compute-0 sudo[69715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:45 compute-0 python3.9[69717]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:45 compute-0 sudo[69715]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:45 compute-0 sudo[69868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szspzbsakrslrxaogizroycmhxwasrfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976225.3009443-131-230074258050890/AnsiballZ_stat.py'
Nov 24 09:23:45 compute-0 sudo[69868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:45 compute-0 python3.9[69870]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:23:45 compute-0 sudo[69868]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:46 compute-0 sudo[70022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txoyqdwnhkcgbokzhpmakygpihbrpjfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976226.2350054-155-137549199258813/AnsiballZ_command.py'
Nov 24 09:23:46 compute-0 sudo[70022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:46 compute-0 python3.9[70024]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:46 compute-0 sudo[70022]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:47 compute-0 sudo[70177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbnfmybseaillvvbrmmykpfhcefabipt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976227.0812144-179-145622799908002/AnsiballZ_file.py'
Nov 24 09:23:47 compute-0 sudo[70177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:47 compute-0 python3.9[70179]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:23:47 compute-0 sudo[70177]: pam_unix(sudo:session): session closed for user root
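
This block closes the marker-file handshake opened at 09:23:16, when edpm-rules.nft.changed was touched after the rules were rewritten: the chains file is loaded unconditionally, the marker is stat-ed, the flush/rules/update-jumps bundle is replayed through nft -f - only if the marker exists, and the marker is then removed. A sketch of the sequence; task and variable names are assumptions, commands and paths are logged:

    - name: Load the EDPM chains   # hypothetical task name
      ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

    - name: Check whether the rules changed in this run
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: edpm_rules_changed   # hypothetical variable name

    - name: Flush and reload the EDPM rules only when needed
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: edpm_rules_changed.stat.exists

    - name: Clear the change marker
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent
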
Nov 24 09:23:48 compute-0 sshd-session[69104]: Connection closed by 192.168.122.30 port 54516
Nov 24 09:23:48 compute-0 sshd-session[69101]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:23:48 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 24 09:23:48 compute-0 systemd[1]: session-16.scope: Consumed 4.837s CPU time.
Nov 24 09:23:48 compute-0 systemd-logind[822]: Session 16 logged out. Waiting for processes to exit.
Nov 24 09:23:48 compute-0 systemd-logind[822]: Removed session 16.
Nov 24 09:23:52 compute-0 sshd-session[70205]: Accepted publickey for zuul from 192.168.122.30 port 53792 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:23:52 compute-0 systemd-logind[822]: New session 17 of user zuul.
Nov 24 09:23:53 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 24 09:23:53 compute-0 sshd-session[70205]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:23:54 compute-0 python3.9[70358]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:23:55 compute-0 sudo[70512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmkshziwjdmjijbtpxzywhprvzngliki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976234.7043772-62-188547269524681/AnsiballZ_setup.py'
Nov 24 09:23:55 compute-0 sudo[70512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:55 compute-0 python3.9[70514]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:23:55 compute-0 sudo[70512]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:55 compute-0 sudo[70596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aexnxxfubzflqvroadfxjedbyhvybgcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976234.7043772-62-188547269524681/AnsiballZ_dnf.py'
Nov 24 09:23:55 compute-0 sudo[70596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:23:56 compute-0 python3.9[70598]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 09:23:57 compute-0 sudo[70596]: pam_unix(sudo:session): session closed for user root
Nov 24 09:23:58 compute-0 python3.9[70749]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:23:59 compute-0 python3.9[70900]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
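
yum-utils was installed just above because it provides needs-restarting; with -r the tool only reports whether a full reboot is required, exiting 0 for no and 1 for yes, so a playbook has to tolerate the nonzero return code. The find that follows sweeps /var/lib/openstack/reboot_required/ for flag files that other roles may have dropped. A sketch of the check, with hypothetical task and variable names:

    - name: Check whether the node needs a reboot   # hypothetical task name
      ansible.builtin.command: needs-restarting -r
      register: reboot_check   # hypothetical variable name
      failed_when: reboot_check.rc not in [0, 1]   # rc 1 just means "reboot required"
      changed_when: false
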
Nov 24 09:24:00 compute-0 python3.9[71050]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:24:00 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:24:00 compute-0 python3.9[71201]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:24:01 compute-0 sshd-session[70208]: Connection closed by 192.168.122.30 port 53792
Nov 24 09:24:01 compute-0 sshd-session[70205]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:24:01 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 24 09:24:01 compute-0 systemd[1]: session-17.scope: Consumed 6.015s CPU time.
Nov 24 09:24:01 compute-0 systemd-logind[822]: Session 17 logged out. Waiting for processes to exit.
Nov 24 09:24:01 compute-0 systemd-logind[822]: Removed session 17.
Nov 24 09:24:10 compute-0 sshd-session[71226]: Accepted publickey for zuul from 38.129.56.127 port 43098 ssh2: RSA SHA256:UBnduE29/r4JICQE22jchpBfdroBtCYqENielfKVzAM
Nov 24 09:24:10 compute-0 systemd-logind[822]: New session 18 of user zuul.
Nov 24 09:24:10 compute-0 systemd[1]: Started Session 18 of User zuul.
Nov 24 09:24:10 compute-0 sshd-session[71226]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:24:10 compute-0 sudo[71302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxpqsubbxtphzgydhdxrbfphlauqsqcz ; /usr/bin/python3'
Nov 24 09:24:10 compute-0 sudo[71302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:11 compute-0 useradd[71306]: new group: name=ceph-admin, GID=42478
Nov 24 09:24:11 compute-0 useradd[71306]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 24 09:24:11 compute-0 sudo[71302]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:11 compute-0 sudo[71388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqjwjebcuzmtjkddbqvftpfiikuvjgsv ; /usr/bin/python3'
Nov 24 09:24:11 compute-0 sudo[71388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:11 compute-0 sudo[71388]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:12 compute-0 sudo[71461]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lepxjqbtmzaaroppahwyflcpcvivxthd ; /usr/bin/python3'
Nov 24 09:24:12 compute-0 sudo[71461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:12 compute-0 sudo[71461]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:12 compute-0 sudo[71511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwvfntqzihyhjqteqvlvrrmukynqccgw ; /usr/bin/python3'
Nov 24 09:24:12 compute-0 sudo[71511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:12 compute-0 sudo[71511]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:13 compute-0 sudo[71537]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kryqpxsyscqblqmklkyimokpwrdpunrp ; /usr/bin/python3'
Nov 24 09:24:13 compute-0 sudo[71537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:13 compute-0 sudo[71537]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:13 compute-0 sudo[71563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpjndwjzowwepwugevkasyjpnjubxyhe ; /usr/bin/python3'
Nov 24 09:24:13 compute-0 sudo[71563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:13 compute-0 sudo[71563]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:14 compute-0 sudo[71589]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gttrfyizwvnqabuuaxxrqmczegjkrkzy ; /usr/bin/python3'
Nov 24 09:24:14 compute-0 sudo[71589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:14 compute-0 sudo[71589]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:14 compute-0 sudo[71667]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwnanijvdquxzlnzkrcbzkervrhdjpgh ; /usr/bin/python3'
Nov 24 09:24:14 compute-0 sudo[71667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:14 compute-0 sudo[71667]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:14 compute-0 sudo[71740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvvshqcybhczydqpfhlismktppibemmv ; /usr/bin/python3'
Nov 24 09:24:14 compute-0 sudo[71740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:15 compute-0 sudo[71740]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:15 compute-0 sudo[71842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oemzofgoljsebshhnatuhdvtsdxrulyb ; /usr/bin/python3'
Nov 24 09:24:15 compute-0 sudo[71842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:15 compute-0 sudo[71842]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:16 compute-0 sudo[71915]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkrqyqbutpvneeihknqviencuuakikcu ; /usr/bin/python3'
Nov 24 09:24:16 compute-0 sudo[71915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:16 compute-0 sudo[71915]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:16 compute-0 sudo[71965]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiazzxflsctpnapbesyvaxexireexxta ; /usr/bin/python3'
Nov 24 09:24:16 compute-0 sudo[71965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:17 compute-0 python3[71967]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:24:18 compute-0 sudo[71965]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:18 compute-0 sudo[72060]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azmfdviieueyxuukqvwvithuboykvavy ; /usr/bin/python3'
Nov 24 09:24:18 compute-0 sudo[72060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:19 compute-0 python3[72062]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 09:24:20 compute-0 sudo[72060]: pam_unix(sudo:session): session closed for user root
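
Session 18 (note the different source address, 38.129.56.127, and the plain /usr/bin/python3 become commands: this looks like a separate playbook run from another controller) starts by installing the storage and container tooling the Ceph setup needs. A sketch with the logged package list (the task name is an assumption):

    - name: Install storage and container tooling   # hypothetical task name
      ansible.builtin.dnf:
        name:
          - util-linux
          - lvm2
          - jq
          - podman
        state: present
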
Nov 24 09:24:20 compute-0 sudo[72087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwdgeffonlyyzsetxbzdmwvnbgwmpdmn ; /usr/bin/python3'
Nov 24 09:24:20 compute-0 sudo[72087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:20 compute-0 python3[72089]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 09:24:20 compute-0 sudo[72087]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:20 compute-0 chronyd[58443]: Selected source 23.133.168.246 (pool.ntp.org)
Nov 24 09:24:21 compute-0 sudo[72113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xipjbfppzvvpllhlxnolhutcvgvpkppj ; /usr/bin/python3'
Nov 24 09:24:21 compute-0 sudo[72113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:21 compute-0 python3[72115]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:24:21 compute-0 kernel: loop: module loaded
Nov 24 09:24:21 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 24 09:24:21 compute-0 sudo[72113]: pam_unix(sudo:session): session closed for user root
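
The dd invocation writes zero blocks and merely seeks 20G into the output before truncating, which yields a 20 GiB sparse file at /var/lib/ceph-osd-0.img; the kernel lines above confirm loop3 resized to 41943040 512-byte sectors, exactly 20 GiB. A sketch of the logged shell (only the task name is invented):

    - name: Create a sparse 20 GiB backing file and attach it to /dev/loop3   # hypothetical name
      ansible.builtin.shell: |
        dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G   # sparse, no data written
        losetup /dev/loop3 /var/lib/ceph-osd-0.img
        lsblk
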
Nov 24 09:24:21 compute-0 sudo[72148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grcsezvhlolmjihzrsfkkmcsgmvpypcs ; /usr/bin/python3'
Nov 24 09:24:21 compute-0 sudo[72148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:21 compute-0 python3[72150]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:24:21 compute-0 lvm[72153]: PV /dev/loop3 not used.
Nov 24 09:24:21 compute-0 lvm[72162]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:24:22 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 24 09:24:22 compute-0 sudo[72148]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:22 compute-0 lvm[72164]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 24 09:24:22 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
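
The loop device is then turned into an LVM stack for the OSD: a physical volume, a volume group ceph_vg0, and one logical volume ceph_lv0 consuming all free extents; the lvm-activate-ceph_vg0.service lines show systemd's event-driven autoactivation kicking in as soon as the VG is complete. A sketch of the logged shell (only the task name is invented):

    - name: Carve the loop device into a single LV for the OSD   # hypothetical task name
      ansible.builtin.shell: |
        pvcreate /dev/loop3
        vgcreate ceph_vg0 /dev/loop3
        lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0   # one LV spanning every free extent
        lvs
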
Nov 24 09:24:22 compute-0 sudo[72240]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjarvcgskysstvkhkjxfkeylhsswwikv ; /usr/bin/python3'
Nov 24 09:24:22 compute-0 sudo[72240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:22 compute-0 python3[72242]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:24:22 compute-0 sudo[72240]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:22 compute-0 sudo[72313]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrqulkjsenjdwukrhnpqxibvasruwoqw ; /usr/bin/python3'
Nov 24 09:24:22 compute-0 sudo[72313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:23 compute-0 python3[72315]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763976262.2683966-36786-88868199497821/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:24:23 compute-0 sudo[72313]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:23 compute-0 sudo[72363]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzrqlpcfcbddiliutafzgttmafwuheii ; /usr/bin/python3'
Nov 24 09:24:23 compute-0 sudo[72363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:23 compute-0 python3[72365]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:24:23 compute-0 systemd[1]: Reloading.
Nov 24 09:24:23 compute-0 systemd-sysv-generator[72395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:24:23 compute-0 systemd-rc-local-generator[72388]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:24:24 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 24 09:24:24 compute-0 bash[72405]: /dev/loop3: [64513]:4194934 (/var/lib/ceph-osd-0.img)
Nov 24 09:24:24 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 24 09:24:24 compute-0 lvm[72406]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:24:24 compute-0 lvm[72406]: VG ceph_vg0 finished
Nov 24 09:24:24 compute-0 sudo[72363]: pam_unix(sudo:session): session closed for user root
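
The unit file's content is never logged; visible are only its path, mode 0644, its description (Ceph OSD losetup), its oneshot-style start/finish, and the losetup status line it prints. A plausible reconstruction under those constraints; every line below is an assumption except the unit name and description:

    # /etc/systemd/system/ceph-osd-losetup-0.service (hypothetical reconstruction)
    [Unit]
    Description=Ceph OSD losetup
    After=var.mount

    [Service]
    Type=oneshot
    RemainAfterExit=true
    # re-attach the sparse backing file if needed, then print the attachment
    ExecStart=/usr/bin/bash -c '/usr/sbin/losetup /dev/loop3 || /usr/sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target
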
Nov 24 09:24:26 compute-0 python3[72430]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:24:29 compute-0 sudo[72522]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxwyxecuxdnwhagvibyzdfzvsfrgeesf ; /usr/bin/python3'
Nov 24 09:24:29 compute-0 sudo[72522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:29 compute-0 python3[72524]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 09:24:31 compute-0 sudo[72522]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:32 compute-0 sudo[72580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abwgasfrzeshbyetlobtwksslmushsnf ; /usr/bin/python3'
Nov 24 09:24:32 compute-0 sudo[72580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:32 compute-0 python3[72582]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 09:24:36 compute-0 groupadd[72592]: group added to /etc/group: name=cephadm, GID=992
Nov 24 09:24:36 compute-0 groupadd[72592]: group added to /etc/gshadow: name=cephadm
Nov 24 09:24:36 compute-0 groupadd[72592]: new group: name=cephadm, GID=992
Nov 24 09:24:36 compute-0 useradd[72599]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 24 09:24:36 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:24:36 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:24:36 compute-0 sudo[72580]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:24:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:24:37 compute-0 systemd[1]: run-r886a6c7a139e4dfd8b57ec32b4c06e3e.service: Deactivated successfully.
Nov 24 09:24:37 compute-0 sudo[72695]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfnxmhmzmazeyozojytkwijhjvebrurb ; /usr/bin/python3'
Nov 24 09:24:37 compute-0 sudo[72695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:37 compute-0 python3[72697]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 09:24:37 compute-0 sudo[72695]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:37 compute-0 sudo[72723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pluockcdgiporjtrfkwqupoiekqqxtxq ; /usr/bin/python3'
Nov 24 09:24:37 compute-0 sudo[72723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:37 compute-0 python3[72725]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:24:38 compute-0 sudo[72723]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:38 compute-0 sudo[72786]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcjpmjhzbkimdepgaiwacvhjlertxfbr ; /usr/bin/python3'
Nov 24 09:24:38 compute-0 sudo[72786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:38 compute-0 python3[72788]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:24:38 compute-0 sudo[72786]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:38 compute-0 sudo[72812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqibfgdivdofutcgxdgvzdyztcynunjt ; /usr/bin/python3'
Nov 24 09:24:38 compute-0 sudo[72812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:24:39 compute-0 python3[72814]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:24:39 compute-0 sudo[72812]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:39 compute-0 sudo[72890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpmqorebzoqgrzvdifdoadkiecapxurx ; /usr/bin/python3'
Nov 24 09:24:39 compute-0 sudo[72890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:39 compute-0 python3[72892]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:24:39 compute-0 sudo[72890]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:39 compute-0 sudo[72963]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryiwthlmrpqmflspozzvbwlpzbilwdqu ; /usr/bin/python3'
Nov 24 09:24:39 compute-0 sudo[72963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:40 compute-0 python3[72965]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763976279.450331-36978-195469038017051/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:24:40 compute-0 sudo[72963]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:41 compute-0 sudo[73065]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duuxilhkkokwqjercdbtltcgbhvlezeb ; /usr/bin/python3'
Nov 24 09:24:41 compute-0 sudo[73065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:41 compute-0 python3[73067]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:24:41 compute-0 sudo[73065]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:41 compute-0 sudo[73138]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmfyaefjymlhifzrtsrzhbgsljopyfdn ; /usr/bin/python3'
Nov 24 09:24:41 compute-0 sudo[73138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:41 compute-0 python3[73140]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763976281.0118651-36997-137582133436304/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:24:41 compute-0 sudo[73138]: pam_unix(sudo:session): session closed for user root
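
The spec's actual content is hidden (content=NOT_LOGGING_PARAMETER); only its path, ownership, and checksum are visible. For orientation, a cephadm service specification for a node like this one might look as follows; every value is illustrative, with only the host name, address, and LV path taken from earlier log lines:

    # illustrative cephadm spec only; the real /home/ceph-admin/specs/ceph_spec.yaml is not logged
    service_type: host
    hostname: compute-0.ctlplane.example.com
    addr: 192.168.122.100
    ---
    service_type: mon
    placement:
      hosts:
        - compute-0.ctlplane.example.com
    ---
    service_type: osd
    service_id: default_drive_group   # hypothetical service id
    placement:
      hosts:
        - compute-0.ctlplane.example.com
    data_devices:
      paths:
        - /dev/ceph_vg0/ceph_lv0   # the LV created above
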
Nov 24 09:24:42 compute-0 sudo[73188]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meuvmfccqhjjtpdqmnyyjbmxfbqwnden ; /usr/bin/python3'
Nov 24 09:24:42 compute-0 sudo[73188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:42 compute-0 python3[73190]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 09:24:42 compute-0 sudo[73188]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:42 compute-0 sudo[73216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivridjqbckiemhvslflzhnninnfoschl ; /usr/bin/python3'
Nov 24 09:24:42 compute-0 sudo[73216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:42 compute-0 python3[73218]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 09:24:42 compute-0 sudo[73216]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:42 compute-0 sudo[73244]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qllbevmudpjzcvlihhvxwtmfdvgrfbnd ; /usr/bin/python3'
Nov 24 09:24:42 compute-0 sudo[73244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:42 compute-0 python3[73246]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 09:24:42 compute-0 sudo[73244]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:43 compute-0 sudo[73272]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kahjwdiajtlibaipnvzrhtsaetinlizz ; /usr/bin/python3'
Nov 24 09:24:43 compute-0 sudo[73272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:24:43 compute-0 python3[73274]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
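
The bootstrap call skips firewalld (nftables is already managed by the EDPM rule files above), reuses the ceph-admin SSH keypair whose presence was just verified, pins a fixed --fsid so reruns converge on the same cluster identity, seeds settings from assimilate_ceph.conf, skips the monitoring stack and dashboard, and anchors the first monitor on the ctlplane address 192.168.122.100. The stray backslash logged before --skip-monitoring-stack is a leftover line continuation that the shell strips, so it is harmless. Reformatted for readability (only the task name is invented; the command is logged verbatim above):

    - name: Bootstrap the Ceph cluster with cephadm   # hypothetical task name
      ansible.builtin.shell: |
        /usr/sbin/cephadm bootstrap \
          --skip-firewalld \
          --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
          --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
          --ssh-user ceph-admin \
          --allow-fqdn-hostname \
          --output-keyring /etc/ceph/ceph.client.admin.keyring \
          --output-config /etc/ceph/ceph.conf \
          --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 \
          --config /home/ceph-admin/assimilate_ceph.conf \
          --skip-monitoring-stack --skip-dashboard \
          --mon-ip 192.168.122.100

The SSH session from 192.168.122.100 that follows is cephadm verifying its own ceph-admin access (note the sudo /bin/echo probe) before continuing the bootstrap.
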
Nov 24 09:24:43 compute-0 sshd-session[73278]: Accepted publickey for ceph-admin from 192.168.122.100 port 49234 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:24:43 compute-0 systemd-logind[822]: New session 19 of user ceph-admin.
Nov 24 09:24:43 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 09:24:43 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 09:24:43 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 09:24:43 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 24 09:24:43 compute-0 systemd[73282]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:24:43 compute-0 systemd[73282]: Queued start job for default target Main User Target.
Nov 24 09:24:43 compute-0 systemd[73282]: Created slice User Application Slice.
Nov 24 09:24:43 compute-0 systemd[73282]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 09:24:43 compute-0 systemd[73282]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 09:24:43 compute-0 systemd[73282]: Reached target Paths.
Nov 24 09:24:43 compute-0 systemd[73282]: Reached target Timers.
Nov 24 09:24:43 compute-0 systemd[73282]: Starting D-Bus User Message Bus Socket...
Nov 24 09:24:43 compute-0 systemd[73282]: Starting Create User's Volatile Files and Directories...
Nov 24 09:24:43 compute-0 systemd[73282]: Finished Create User's Volatile Files and Directories.
Nov 24 09:24:43 compute-0 systemd[73282]: Listening on D-Bus User Message Bus Socket.
Nov 24 09:24:43 compute-0 systemd[73282]: Reached target Sockets.
Nov 24 09:24:43 compute-0 systemd[73282]: Reached target Basic System.
Nov 24 09:24:43 compute-0 systemd[73282]: Reached target Main User Target.
Nov 24 09:24:43 compute-0 systemd[73282]: Startup finished in 140ms.
Nov 24 09:24:43 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 24 09:24:43 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Nov 24 09:24:43 compute-0 sshd-session[73278]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:24:43 compute-0 sudo[73299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 24 09:24:43 compute-0 sudo[73299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:24:43 compute-0 sudo[73299]: pam_unix(sudo:session): session closed for user root
Nov 24 09:24:43 compute-0 sshd-session[73298]: Received disconnect from 192.168.122.100 port 49234:11: disconnected by user
Nov 24 09:24:43 compute-0 sshd-session[73298]: Disconnected from user ceph-admin 192.168.122.100 port 49234
Nov 24 09:24:43 compute-0 sshd-session[73278]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:24:44 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Nov 24 09:24:44 compute-0 systemd-logind[822]: Session 19 logged out. Waiting for processes to exit.
Nov 24 09:24:44 compute-0 systemd-logind[822]: Removed session 19.
Nov 24 09:24:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:24:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat162915547-lower\x2dmapped.mount: Deactivated successfully.
Nov 24 09:24:54 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 24 09:24:54 compute-0 systemd[73282]: Activating special unit Exit the Session...
Nov 24 09:24:54 compute-0 systemd[73282]: Stopped target Main User Target.
Nov 24 09:24:54 compute-0 systemd[73282]: Stopped target Basic System.
Nov 24 09:24:54 compute-0 systemd[73282]: Stopped target Paths.
Nov 24 09:24:54 compute-0 systemd[73282]: Stopped target Sockets.
Nov 24 09:24:54 compute-0 systemd[73282]: Stopped target Timers.
Nov 24 09:24:54 compute-0 systemd[73282]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 24 09:24:54 compute-0 systemd[73282]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 09:24:54 compute-0 systemd[73282]: Closed D-Bus User Message Bus Socket.
Nov 24 09:24:54 compute-0 systemd[73282]: Stopped Create User's Volatile Files and Directories.
Nov 24 09:24:54 compute-0 systemd[73282]: Removed slice User Application Slice.
Nov 24 09:24:54 compute-0 systemd[73282]: Reached target Shutdown.
Nov 24 09:24:54 compute-0 systemd[73282]: Finished Exit the Session.
Nov 24 09:24:54 compute-0 systemd[73282]: Reached target Exit the Session.
Nov 24 09:24:54 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 24 09:24:54 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 24 09:24:54 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 24 09:24:54 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 24 09:24:54 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 24 09:24:54 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 24 09:24:54 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 24 09:25:00 compute-0 podman[73375]: 2025-11-24 09:25:00.595367816 +0000 UTC m=+16.330196934 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:00 compute-0 podman[73437]: 2025-11-24 09:25:00.67447334 +0000 UTC m=+0.042303615 container create 2e360e597798708859cc4f91747375245f8e019ec78015d8b6f0a41363045ed8 (image=quay.io/ceph/ceph:v19, name=funny_euler, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:00 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 24 09:25:00 compute-0 systemd[1]: Started libpod-conmon-2e360e597798708859cc4f91747375245f8e019ec78015d8b6f0a41363045ed8.scope.
Nov 24 09:25:00 compute-0 podman[73437]: 2025-11-24 09:25:00.65607179 +0000 UTC m=+0.023902095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:00 compute-0 podman[73437]: 2025-11-24 09:25:00.80577252 +0000 UTC m=+0.173602845 container init 2e360e597798708859cc4f91747375245f8e019ec78015d8b6f0a41363045ed8 (image=quay.io/ceph/ceph:v19, name=funny_euler, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:00 compute-0 podman[73437]: 2025-11-24 09:25:00.815718813 +0000 UTC m=+0.183549088 container start 2e360e597798708859cc4f91747375245f8e019ec78015d8b6f0a41363045ed8 (image=quay.io/ceph/ceph:v19, name=funny_euler, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:00 compute-0 podman[73437]: 2025-11-24 09:25:00.819220869 +0000 UTC m=+0.187051154 container attach 2e360e597798708859cc4f91747375245f8e019ec78015d8b6f0a41363045ed8 (image=quay.io/ceph/ceph:v19, name=funny_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:25:00 compute-0 funny_euler[73453]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Nov 24 09:25:00 compute-0 systemd[1]: libpod-2e360e597798708859cc4f91747375245f8e019ec78015d8b6f0a41363045ed8.scope: Deactivated successfully.
Nov 24 09:25:00 compute-0 podman[73437]: 2025-11-24 09:25:00.928645704 +0000 UTC m=+0.296475979 container died 2e360e597798708859cc4f91747375245f8e019ec78015d8b6f0a41363045ed8 (image=quay.io/ceph/ceph:v19, name=funny_euler, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 24 09:25:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3fb2deb1bf4600f13e0772f3157491df728220a9a01e6eb793cf2513c249c99-merged.mount: Deactivated successfully.
Nov 24 09:25:00 compute-0 podman[73437]: 2025-11-24 09:25:00.961417735 +0000 UTC m=+0.329248010 container remove 2e360e597798708859cc4f91747375245f8e019ec78015d8b6f0a41363045ed8 (image=quay.io/ceph/ceph:v19, name=funny_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True)
Nov 24 09:25:00 compute-0 systemd[1]: libpod-conmon-2e360e597798708859cc4f91747375245f8e019ec78015d8b6f0a41363045ed8.scope: Deactivated successfully.
Nov 24 09:25:01 compute-0 podman[73469]: 2025-11-24 09:25:01.024562408 +0000 UTC m=+0.044550439 container create f5657906a27764611d5c81b21f8caa34a6a42831be13387cc9dffad5bc286743 (image=quay.io/ceph/ceph:v19, name=focused_chandrasekhar, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:01 compute-0 systemd[1]: Started libpod-conmon-f5657906a27764611d5c81b21f8caa34a6a42831be13387cc9dffad5bc286743.scope.
Nov 24 09:25:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:01 compute-0 podman[73469]: 2025-11-24 09:25:01.079475192 +0000 UTC m=+0.099463283 container init f5657906a27764611d5c81b21f8caa34a6a42831be13387cc9dffad5bc286743 (image=quay.io/ceph/ceph:v19, name=focused_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:01 compute-0 podman[73469]: 2025-11-24 09:25:01.086475543 +0000 UTC m=+0.106463574 container start f5657906a27764611d5c81b21f8caa34a6a42831be13387cc9dffad5bc286743 (image=quay.io/ceph/ceph:v19, name=focused_chandrasekhar, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 24 09:25:01 compute-0 focused_chandrasekhar[73485]: 167 167
Nov 24 09:25:01 compute-0 systemd[1]: libpod-f5657906a27764611d5c81b21f8caa34a6a42831be13387cc9dffad5bc286743.scope: Deactivated successfully.
Nov 24 09:25:01 compute-0 podman[73469]: 2025-11-24 09:25:01.089814694 +0000 UTC m=+0.109802775 container attach f5657906a27764611d5c81b21f8caa34a6a42831be13387cc9dffad5bc286743 (image=quay.io/ceph/ceph:v19, name=focused_chandrasekhar, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:01 compute-0 podman[73469]: 2025-11-24 09:25:01.090125531 +0000 UTC m=+0.110113572 container died f5657906a27764611d5c81b21f8caa34a6a42831be13387cc9dffad5bc286743 (image=quay.io/ceph/ceph:v19, name=focused_chandrasekhar, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 09:25:01 compute-0 podman[73469]: 2025-11-24 09:25:01.004881647 +0000 UTC m=+0.024869728 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:01 compute-0 podman[73469]: 2025-11-24 09:25:01.122585825 +0000 UTC m=+0.142573856 container remove f5657906a27764611d5c81b21f8caa34a6a42831be13387cc9dffad5bc286743 (image=quay.io/ceph/ceph:v19, name=focused_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:01 compute-0 systemd[1]: libpod-conmon-f5657906a27764611d5c81b21f8caa34a6a42831be13387cc9dffad5bc286743.scope: Deactivated successfully.
Nov 24 09:25:01 compute-0 podman[73502]: 2025-11-24 09:25:01.175479018 +0000 UTC m=+0.036436952 container create c79849d668eefa06588389c752a48973596a02ec100d2220154e61eee0fbdb5f (image=quay.io/ceph/ceph:v19, name=eloquent_swartz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:01 compute-0 systemd[1]: Started libpod-conmon-c79849d668eefa06588389c752a48973596a02ec100d2220154e61eee0fbdb5f.scope.
Nov 24 09:25:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:01 compute-0 podman[73502]: 2025-11-24 09:25:01.22585512 +0000 UTC m=+0.086813074 container init c79849d668eefa06588389c752a48973596a02ec100d2220154e61eee0fbdb5f (image=quay.io/ceph/ceph:v19, name=eloquent_swartz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:01 compute-0 podman[73502]: 2025-11-24 09:25:01.230154575 +0000 UTC m=+0.091112509 container start c79849d668eefa06588389c752a48973596a02ec100d2220154e61eee0fbdb5f (image=quay.io/ceph/ceph:v19, name=eloquent_swartz, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:01 compute-0 podman[73502]: 2025-11-24 09:25:01.233862156 +0000 UTC m=+0.094820110 container attach c79849d668eefa06588389c752a48973596a02ec100d2220154e61eee0fbdb5f (image=quay.io/ceph/ceph:v19, name=eloquent_swartz, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 09:25:01 compute-0 eloquent_swartz[73518]: AQBtJCRpR13aDhAAWJb4Ru20WDkqe1s9DZBoSQ==
Nov 24 09:25:01 compute-0 systemd[1]: libpod-c79849d668eefa06588389c752a48973596a02ec100d2220154e61eee0fbdb5f.scope: Deactivated successfully.
Nov 24 09:25:01 compute-0 podman[73502]: 2025-11-24 09:25:01.252767457 +0000 UTC m=+0.113725391 container died c79849d668eefa06588389c752a48973596a02ec100d2220154e61eee0fbdb5f (image=quay.io/ceph/ceph:v19, name=eloquent_swartz, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:01 compute-0 podman[73502]: 2025-11-24 09:25:01.15880472 +0000 UTC m=+0.019762674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:01 compute-0 podman[73502]: 2025-11-24 09:25:01.285231002 +0000 UTC m=+0.146188936 container remove c79849d668eefa06588389c752a48973596a02ec100d2220154e61eee0fbdb5f (image=quay.io/ceph/ceph:v19, name=eloquent_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:01 compute-0 systemd[1]: libpod-conmon-c79849d668eefa06588389c752a48973596a02ec100d2220154e61eee0fbdb5f.scope: Deactivated successfully.
Nov 24 09:25:01 compute-0 podman[73538]: 2025-11-24 09:25:01.34856311 +0000 UTC m=+0.044333295 container create 38a463cb3826af4fff430f4d039d0d8d4d1e776612f4d638dd6f5c7a8ad04ba3 (image=quay.io/ceph/ceph:v19, name=upbeat_gould, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 09:25:01 compute-0 systemd[1]: Started libpod-conmon-38a463cb3826af4fff430f4d039d0d8d4d1e776612f4d638dd6f5c7a8ad04ba3.scope.
Nov 24 09:25:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:01 compute-0 podman[73538]: 2025-11-24 09:25:01.409620992 +0000 UTC m=+0.105391227 container init 38a463cb3826af4fff430f4d039d0d8d4d1e776612f4d638dd6f5c7a8ad04ba3 (image=quay.io/ceph/ceph:v19, name=upbeat_gould, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:25:01 compute-0 podman[73538]: 2025-11-24 09:25:01.416968362 +0000 UTC m=+0.112738567 container start 38a463cb3826af4fff430f4d039d0d8d4d1e776612f4d638dd6f5c7a8ad04ba3 (image=quay.io/ceph/ceph:v19, name=upbeat_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:25:01 compute-0 podman[73538]: 2025-11-24 09:25:01.421773429 +0000 UTC m=+0.117543654 container attach 38a463cb3826af4fff430f4d039d0d8d4d1e776612f4d638dd6f5c7a8ad04ba3 (image=quay.io/ceph/ceph:v19, name=upbeat_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:01 compute-0 podman[73538]: 2025-11-24 09:25:01.327213408 +0000 UTC m=+0.022983633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:01 compute-0 upbeat_gould[73554]: AQBtJCRpoh0AGhAAAOrLWKdqUMFvqh9iKYJ4TQ==
Nov 24 09:25:01 compute-0 systemd[1]: libpod-38a463cb3826af4fff430f4d039d0d8d4d1e776612f4d638dd6f5c7a8ad04ba3.scope: Deactivated successfully.
Nov 24 09:25:01 compute-0 podman[73538]: 2025-11-24 09:25:01.439718328 +0000 UTC m=+0.135488533 container died 38a463cb3826af4fff430f4d039d0d8d4d1e776612f4d638dd6f5c7a8ad04ba3 (image=quay.io/ceph/ceph:v19, name=upbeat_gould, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:01 compute-0 podman[73538]: 2025-11-24 09:25:01.469896526 +0000 UTC m=+0.165666721 container remove 38a463cb3826af4fff430f4d039d0d8d4d1e776612f4d638dd6f5c7a8ad04ba3 (image=quay.io/ceph/ceph:v19, name=upbeat_gould, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 24 09:25:01 compute-0 systemd[1]: libpod-conmon-38a463cb3826af4fff430f4d039d0d8d4d1e776612f4d638dd6f5c7a8ad04ba3.scope: Deactivated successfully.
Nov 24 09:25:01 compute-0 podman[73576]: 2025-11-24 09:25:01.533626484 +0000 UTC m=+0.043603877 container create eb1a080d0f52be6af34c2148b8536f2e821de7aa9fee04b8e09c9766b0314f18 (image=quay.io/ceph/ceph:v19, name=sweet_turing, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 24 09:25:01 compute-0 systemd[1]: Started libpod-conmon-eb1a080d0f52be6af34c2148b8536f2e821de7aa9fee04b8e09c9766b0314f18.scope.
Nov 24 09:25:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:01 compute-0 podman[73576]: 2025-11-24 09:25:01.517749056 +0000 UTC m=+0.027726489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:01 compute-0 podman[73576]: 2025-11-24 09:25:01.62588435 +0000 UTC m=+0.135861773 container init eb1a080d0f52be6af34c2148b8536f2e821de7aa9fee04b8e09c9766b0314f18 (image=quay.io/ceph/ceph:v19, name=sweet_turing, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 09:25:01 compute-0 podman[73576]: 2025-11-24 09:25:01.631295362 +0000 UTC m=+0.141272755 container start eb1a080d0f52be6af34c2148b8536f2e821de7aa9fee04b8e09c9766b0314f18 (image=quay.io/ceph/ceph:v19, name=sweet_turing, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:01 compute-0 sweet_turing[73593]: AQBtJCRp4pqzJhAA7T0d18odr+R/PXngnJp3/A==
Nov 24 09:25:01 compute-0 systemd[1]: libpod-eb1a080d0f52be6af34c2148b8536f2e821de7aa9fee04b8e09c9766b0314f18.scope: Deactivated successfully.
Nov 24 09:25:01 compute-0 podman[73576]: 2025-11-24 09:25:01.712582959 +0000 UTC m=+0.222560412 container attach eb1a080d0f52be6af34c2148b8536f2e821de7aa9fee04b8e09c9766b0314f18 (image=quay.io/ceph/ceph:v19, name=sweet_turing, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:25:01 compute-0 podman[73576]: 2025-11-24 09:25:01.713144533 +0000 UTC m=+0.223121986 container died eb1a080d0f52be6af34c2148b8536f2e821de7aa9fee04b8e09c9766b0314f18 (image=quay.io/ceph/ceph:v19, name=sweet_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-05ccbf597fd453173bfec4f4e58fc7b2bdae78afdb3f84061993c2c8d61e3077-merged.mount: Deactivated successfully.
Nov 24 09:25:03 compute-0 podman[73576]: 2025-11-24 09:25:03.226459389 +0000 UTC m=+1.736436782 container remove eb1a080d0f52be6af34c2148b8536f2e821de7aa9fee04b8e09c9766b0314f18 (image=quay.io/ceph/ceph:v19, name=sweet_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:03 compute-0 systemd[1]: libpod-conmon-eb1a080d0f52be6af34c2148b8536f2e821de7aa9fee04b8e09c9766b0314f18.scope: Deactivated successfully.
Nov 24 09:25:03 compute-0 podman[73613]: 2025-11-24 09:25:03.312298347 +0000 UTC m=+0.063543964 container create 3862e5e396163a41431306561856552d4506af6e802d58d96d1a968f2be9ea71 (image=quay.io/ceph/ceph:v19, name=loving_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 24 09:25:03 compute-0 systemd[1]: Started libpod-conmon-3862e5e396163a41431306561856552d4506af6e802d58d96d1a968f2be9ea71.scope.
Nov 24 09:25:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:03 compute-0 podman[73613]: 2025-11-24 09:25:03.271202283 +0000 UTC m=+0.022447930 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea094d1cffd651fc9632078a57a448296d833acaecb7ae44db06d84f5dbcbbd/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:03 compute-0 podman[73613]: 2025-11-24 09:25:03.378597059 +0000 UTC m=+0.129842706 container init 3862e5e396163a41431306561856552d4506af6e802d58d96d1a968f2be9ea71 (image=quay.io/ceph/ceph:v19, name=loving_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 24 09:25:03 compute-0 podman[73613]: 2025-11-24 09:25:03.384562305 +0000 UTC m=+0.135807932 container start 3862e5e396163a41431306561856552d4506af6e802d58d96d1a968f2be9ea71 (image=quay.io/ceph/ceph:v19, name=loving_swartz, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:25:03 compute-0 podman[73613]: 2025-11-24 09:25:03.388699466 +0000 UTC m=+0.139945093 container attach 3862e5e396163a41431306561856552d4506af6e802d58d96d1a968f2be9ea71 (image=quay.io/ceph/ceph:v19, name=loving_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 09:25:03 compute-0 loving_swartz[73630]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 24 09:25:03 compute-0 loving_swartz[73630]: setting min_mon_release = quincy
Nov 24 09:25:03 compute-0 loving_swartz[73630]: /usr/bin/monmaptool: set fsid to 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:03 compute-0 loving_swartz[73630]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 24 09:25:03 compute-0 systemd[1]: libpod-3862e5e396163a41431306561856552d4506af6e802d58d96d1a968f2be9ea71.scope: Deactivated successfully.
Nov 24 09:25:03 compute-0 podman[73613]: 2025-11-24 09:25:03.420046082 +0000 UTC m=+0.171291709 container died 3862e5e396163a41431306561856552d4506af6e802d58d96d1a968f2be9ea71 (image=quay.io/ceph/ceph:v19, name=loving_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:25:03 compute-0 podman[73613]: 2025-11-24 09:25:03.460082681 +0000 UTC m=+0.211328308 container remove 3862e5e396163a41431306561856552d4506af6e802d58d96d1a968f2be9ea71 (image=quay.io/ceph/ceph:v19, name=loving_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 09:25:03 compute-0 systemd[1]: libpod-conmon-3862e5e396163a41431306561856552d4506af6e802d58d96d1a968f2be9ea71.scope: Deactivated successfully.
Nov 24 09:25:03 compute-0 podman[73649]: 2025-11-24 09:25:03.524479895 +0000 UTC m=+0.041355552 container create f22c8812a94326ea6c9e6e1b5a2fc72f353ceed027814224dc2fb36613ec88ed (image=quay.io/ceph/ceph:v19, name=intelligent_ishizaka, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:03 compute-0 systemd[1]: Started libpod-conmon-f22c8812a94326ea6c9e6e1b5a2fc72f353ceed027814224dc2fb36613ec88ed.scope.
Nov 24 09:25:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2393570eee4c8749bcd872a363e608d56d3e1a958ec2fe27ecd9ce0b59196b2/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2393570eee4c8749bcd872a363e608d56d3e1a958ec2fe27ecd9ce0b59196b2/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2393570eee4c8749bcd872a363e608d56d3e1a958ec2fe27ecd9ce0b59196b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2393570eee4c8749bcd872a363e608d56d3e1a958ec2fe27ecd9ce0b59196b2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:03 compute-0 podman[73649]: 2025-11-24 09:25:03.586941032 +0000 UTC m=+0.103816719 container init f22c8812a94326ea6c9e6e1b5a2fc72f353ceed027814224dc2fb36613ec88ed (image=quay.io/ceph/ceph:v19, name=intelligent_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:25:03 compute-0 podman[73649]: 2025-11-24 09:25:03.594587999 +0000 UTC m=+0.111463656 container start f22c8812a94326ea6c9e6e1b5a2fc72f353ceed027814224dc2fb36613ec88ed (image=quay.io/ceph/ceph:v19, name=intelligent_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:03 compute-0 podman[73649]: 2025-11-24 09:25:03.598853763 +0000 UTC m=+0.115729450 container attach f22c8812a94326ea6c9e6e1b5a2fc72f353ceed027814224dc2fb36613ec88ed (image=quay.io/ceph/ceph:v19, name=intelligent_ishizaka, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 24 09:25:03 compute-0 podman[73649]: 2025-11-24 09:25:03.505391478 +0000 UTC m=+0.022267155 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:03 compute-0 systemd[1]: libpod-f22c8812a94326ea6c9e6e1b5a2fc72f353ceed027814224dc2fb36613ec88ed.scope: Deactivated successfully.
Nov 24 09:25:03 compute-0 podman[73649]: 2025-11-24 09:25:03.694286206 +0000 UTC m=+0.211161863 container died f22c8812a94326ea6c9e6e1b5a2fc72f353ceed027814224dc2fb36613ec88ed (image=quay.io/ceph/ceph:v19, name=intelligent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:03 compute-0 podman[73649]: 2025-11-24 09:25:03.730303497 +0000 UTC m=+0.247179154 container remove f22c8812a94326ea6c9e6e1b5a2fc72f353ceed027814224dc2fb36613ec88ed (image=quay.io/ceph/ceph:v19, name=intelligent_ishizaka, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:03 compute-0 systemd[1]: libpod-conmon-f22c8812a94326ea6c9e6e1b5a2fc72f353ceed027814224dc2fb36613ec88ed.scope: Deactivated successfully.
Nov 24 09:25:03 compute-0 systemd[1]: Reloading.
Nov 24 09:25:03 compute-0 systemd-rc-local-generator[73731]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:25:03 compute-0 systemd-sysv-generator[73736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-aea094d1cffd651fc9632078a57a448296d833acaecb7ae44db06d84f5dbcbbd-merged.mount: Deactivated successfully.
Nov 24 09:25:04 compute-0 systemd[1]: Reloading.
Nov 24 09:25:04 compute-0 systemd-rc-local-generator[73767]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:25:04 compute-0 systemd-sysv-generator[73771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:25:04 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 24 09:25:04 compute-0 systemd[1]: Reloading.
Nov 24 09:25:04 compute-0 systemd-rc-local-generator[73809]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:25:04 compute-0 systemd-sysv-generator[73812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:25:04 compute-0 systemd[1]: Reached target Ceph cluster 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:25:04 compute-0 systemd[1]: Reloading.
Nov 24 09:25:04 compute-0 systemd-rc-local-generator[73845]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:25:04 compute-0 systemd-sysv-generator[73848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:25:04 compute-0 systemd[1]: Reloading.
Nov 24 09:25:04 compute-0 systemd-rc-local-generator[73888]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:25:04 compute-0 systemd-sysv-generator[73892]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:25:05 compute-0 systemd[1]: Created slice Slice /system/ceph-84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:25:05 compute-0 systemd[1]: Reached target System Time Set.
Nov 24 09:25:05 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 24 09:25:05 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:05 compute-0 podman[73946]: 2025-11-24 09:25:05.382352616 +0000 UTC m=+0.056617126 container create 3f99d71539f1c13063cdee200646932d532e06f8c1ddd6b829d2c13b40b37a92 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b754d76f02f68a4b713d8995dc3cfdc9529a47473b7ef3ffd977658129abe167/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b754d76f02f68a4b713d8995dc3cfdc9529a47473b7ef3ffd977658129abe167/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b754d76f02f68a4b713d8995dc3cfdc9529a47473b7ef3ffd977658129abe167/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b754d76f02f68a4b713d8995dc3cfdc9529a47473b7ef3ffd977658129abe167/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:05 compute-0 podman[73946]: 2025-11-24 09:25:05.352521056 +0000 UTC m=+0.026785596 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:05 compute-0 podman[73946]: 2025-11-24 09:25:05.449015935 +0000 UTC m=+0.123280455 container init 3f99d71539f1c13063cdee200646932d532e06f8c1ddd6b829d2c13b40b37a92 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:05 compute-0 podman[73946]: 2025-11-24 09:25:05.455214907 +0000 UTC m=+0.129479437 container start 3f99d71539f1c13063cdee200646932d532e06f8c1ddd6b829d2c13b40b37a92 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:05 compute-0 bash[73946]: 3f99d71539f1c13063cdee200646932d532e06f8c1ddd6b829d2c13b40b37a92
Nov 24 09:25:05 compute-0 systemd[1]: Started Ceph mon.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:25:05 compute-0 ceph-mon[73966]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:25:05 compute-0 ceph-mon[73966]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 24 09:25:05 compute-0 ceph-mon[73966]: pidfile_write: ignore empty --pid-file
Nov 24 09:25:05 compute-0 ceph-mon[73966]: load: jerasure load: lrc 
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: RocksDB version: 7.9.2
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Git sha 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: DB SUMMARY
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: DB Session ID:  HLSYM47P1YEZ5KYZUQTZ
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: CURRENT file:  CURRENT
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                         Options.error_if_exists: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                       Options.create_if_missing: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                                     Options.env: 0x55655c777c20
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                                Options.info_log: 0x55655d1aed60
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                              Options.statistics: (nil)
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                               Options.use_fsync: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                              Options.db_log_dir: 
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                                 Options.wal_dir: 
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                    Options.write_buffer_manager: 0x55655d1b3900
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                  Options.unordered_write: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                               Options.row_cache: None
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                              Options.wal_filter: None
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.two_write_queues: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.wal_compression: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.atomic_flush: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.max_background_jobs: 2
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.max_background_compactions: -1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.max_subcompactions: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.max_total_wal_size: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                          Options.max_open_files: -1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:       Options.compaction_readahead_size: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Compression algorithms supported:
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         kZSTD supported: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         kXpressCompression supported: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         kBZip2Compression supported: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         kLZ4Compression supported: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         kZlibCompression supported: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         kSnappyCompression supported: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:           Options.merge_operator: 
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:        Options.compaction_filter: None
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55655d1ae500)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55655d1d3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:        Options.write_buffer_size: 33554432
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:  Options.max_write_buffer_number: 2
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:          Options.compression: NoCompression
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.num_levels: 7
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
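The [default] column-family dump above shows the block-based table configuration Ceph compiles into its embedded RocksDB: 4096-byte data blocks, format_version 5, a bloom filter policy, and a BinnedLRUCache block cache of 536870912 bytes split across 2^4 shards. A worked check of those cache numbers (plain arithmetic, no Ceph APIs involved):

    # Worked example from the block_cache_options logged above.
    capacity = 536_870_912        # bytes, as logged
    num_shard_bits = 4            # as logged
    shards = 2 ** num_shard_bits  # a sharded LRU divides capacity per shard
    print(capacity / 2**20)           # 512.0 MiB -> "capacity: 512.00 MB" in the stats dump below
    print(capacity / shards / 2**20)  # 32.0 MiB per shard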
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 42aa12d2-c531-4ddc-8c4c-bc0b5971346b
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976305502296, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976305504310, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "HLSYM47P1YEZ5KYZUQTZ", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976305504409, "job": 1, "event": "recovery_finished"}
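The EVENT_LOG_v1 lines are RocksDB's machine-readable event stream: everything after the marker is a JSON object, so WAL-recovery and table-file-creation events like the two above can be parsed straight out of the journal. A small sketch, with the sample line copied from this log:

    # Sketch: parse a RocksDB EVENT_LOG_v1 journal line into a dict.
    import json

    line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1763976305504409, '
            '"job": 1, "event": "recovery_finished"}')
    event = json.loads(line.split("EVENT_LOG_v1 ", 1)[1])
    print(event["event"], "job", event["job"])   # recovery_finished job 1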
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55655d1d4e00
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: DB pointer 0x55655d2de000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:25:05 compute-0 ceph-mon[73966]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55655d1d3350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
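This stats dump fires on DB open and then every Options.stats_dump_period_sec (600 s, per the options above); with roughly zero seconds of uptime, all cumulative counters are still empty. Live monitor counters can also be read over the daemon's admin socket with the standard `ceph daemon ... perf dump` command; a hedged sketch (on a cephadm host this would normally be run inside `cephadm shell`, and the daemon name comes from this log):

    # Hedged sketch: read live perf counters from mon.compute-0's admin socket.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "daemon", "mon.compute-0", "perf", "dump"], text=True)
    counters = json.loads(out)
    print(sorted(counters)[:5])   # a few top-level counter sections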
Nov 24 09:25:05 compute-0 ceph-mon[73966]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@-1(???) e0 preinit fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 24 09:25:05 compute-0 ceph-mon[73966]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 24 09:25:05 compute-0 ceph-mon[73966]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(cluster) log [DBG] : monmap epoch 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(cluster) log [DBG] : fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(cluster) log [DBG] : last_changed 2025-11-24T09:25:03.414609+0000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(cluster) log [DBG] : created 2025-11-24T09:25:03.414609+0000
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).mds e1 new map
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-11-24T09:25:05.540478+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
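The FSMap print above is what an empty CephFS map looks like on a freshly created cluster: epoch 1, no filesystems, legacy client fscid -1. The same information is available from the CLI as JSON; a minimal sketch:

    # Minimal sketch: confirm no CephFS filesystems exist yet.
    import json, subprocess

    fs_list = json.loads(subprocess.check_output(
        ["ceph", "fs", "ls", "--format", "json"], text=True))
    print(fs_list)   # [] -> matches "No filesystems configured" above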
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(cluster) log [DBG] : fsmap 
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mkfs 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 09:25:05 compute-0 podman[73967]: 2025-11-24 09:25:05.56292211 +0000 UTC m=+0.063379171 container create db50205a9cdc04dc3fdd1a9db0aa9d5fff107c22b1a74767bf756fbfe0d722fa (image=quay.io/ceph/ceph:v19, name=stoic_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:25:05 compute-0 systemd[1]: Started libpod-conmon-db50205a9cdc04dc3fdd1a9db0aa9d5fff107c22b1a74767bf756fbfe0d722fa.scope.
Nov 24 09:25:05 compute-0 podman[73967]: 2025-11-24 09:25:05.537582321 +0000 UTC m=+0.038039432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923b4222a939b71aab860450e032acb59b268d77b1db032fa0be114adcfa236a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923b4222a939b71aab860450e032acb59b268d77b1db032fa0be114adcfa236a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923b4222a939b71aab860450e032acb59b268d77b1db032fa0be114adcfa236a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:05 compute-0 podman[73967]: 2025-11-24 09:25:05.65576898 +0000 UTC m=+0.156226071 container init db50205a9cdc04dc3fdd1a9db0aa9d5fff107c22b1a74767bf756fbfe0d722fa (image=quay.io/ceph/ceph:v19, name=stoic_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 09:25:05 compute-0 podman[73967]: 2025-11-24 09:25:05.664399021 +0000 UTC m=+0.164856112 container start db50205a9cdc04dc3fdd1a9db0aa9d5fff107c22b1a74767bf756fbfe0d722fa (image=quay.io/ceph/ceph:v19, name=stoic_chatterjee, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:05 compute-0 podman[73967]: 2025-11-24 09:25:05.668282446 +0000 UTC m=+0.168739537 container attach db50205a9cdc04dc3fdd1a9db0aa9d5fff107c22b1a74767bf756fbfe0d722fa (image=quay.io/ceph/ceph:v19, name=stoic_chatterjee, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:05 compute-0 ceph-mon[73966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Nov 24 09:25:05 compute-0 ceph-mon[73966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1433431111' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:   cluster:
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:     id:     84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:     health: HEALTH_OK
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:  
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:   services:
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:     mon: 1 daemons, quorum compute-0 (age 0.322823s)
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:     mgr: no daemons active
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:     osd: 0 osds: 0 up, 0 in
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:  
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:   data:
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:     pools:   0 pools, 0 pgs
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:     objects: 0 objects, 0 B
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:     usage:   0 B used, 0 B / 0 B avail
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:     pgs:     
Nov 24 09:25:05 compute-0 stoic_chatterjee[74021]:  
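The indented block above is the bootstrap's first `ceph status` check, run in a throwaway container (stoic_chatterjee): one mon in quorum, no mgr, no OSDs, no pools. The same command emits JSON with --format json, which is easier to assert on; a short sketch:

    # Sketch: the JSON form of the `ceph status` output shown above.
    import json, subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "status", "--format", "json"], text=True))
    print(status["health"]["status"])   # HEALTH_OK, as above
    print(status["quorum_names"])       # ["compute-0"]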
Nov 24 09:25:05 compute-0 systemd[1]: libpod-db50205a9cdc04dc3fdd1a9db0aa9d5fff107c22b1a74767bf756fbfe0d722fa.scope: Deactivated successfully.
Nov 24 09:25:05 compute-0 podman[74047]: 2025-11-24 09:25:05.925355111 +0000 UTC m=+0.029103383 container died db50205a9cdc04dc3fdd1a9db0aa9d5fff107c22b1a74767bf756fbfe0d722fa (image=quay.io/ceph/ceph:v19, name=stoic_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:25:05 compute-0 podman[74047]: 2025-11-24 09:25:05.961505384 +0000 UTC m=+0.065253616 container remove db50205a9cdc04dc3fdd1a9db0aa9d5fff107c22b1a74767bf756fbfe0d722fa (image=quay.io/ceph/ceph:v19, name=stoic_chatterjee, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:05 compute-0 systemd[1]: libpod-conmon-db50205a9cdc04dc3fdd1a9db0aa9d5fff107c22b1a74767bf756fbfe0d722fa.scope: Deactivated successfully.
Nov 24 09:25:06 compute-0 podman[74062]: 2025-11-24 09:25:06.040727071 +0000 UTC m=+0.047771349 container create 8b9ac56cd623ffec7e54841504c11fda773731265cd38dffe1262c1a19229eed (image=quay.io/ceph/ceph:v19, name=unruffled_leakey, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:06 compute-0 systemd[1]: Started libpod-conmon-8b9ac56cd623ffec7e54841504c11fda773731265cd38dffe1262c1a19229eed.scope.
Nov 24 09:25:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b28f82e5e6656dbfbaaf56f01794ad6cc429327f114e03404050b415d28b16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b28f82e5e6656dbfbaaf56f01794ad6cc429327f114e03404050b415d28b16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b28f82e5e6656dbfbaaf56f01794ad6cc429327f114e03404050b415d28b16/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b28f82e5e6656dbfbaaf56f01794ad6cc429327f114e03404050b415d28b16/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:06 compute-0 podman[74062]: 2025-11-24 09:25:06.018057427 +0000 UTC m=+0.025101745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:06 compute-0 podman[74062]: 2025-11-24 09:25:06.137132148 +0000 UTC m=+0.144176476 container init 8b9ac56cd623ffec7e54841504c11fda773731265cd38dffe1262c1a19229eed (image=quay.io/ceph/ceph:v19, name=unruffled_leakey, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:06 compute-0 podman[74062]: 2025-11-24 09:25:06.143073313 +0000 UTC m=+0.150117591 container start 8b9ac56cd623ffec7e54841504c11fda773731265cd38dffe1262c1a19229eed (image=quay.io/ceph/ceph:v19, name=unruffled_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 09:25:06 compute-0 podman[74062]: 2025-11-24 09:25:06.149934361 +0000 UTC m=+0.156978659 container attach 8b9ac56cd623ffec7e54841504c11fda773731265cd38dffe1262c1a19229eed (image=quay.io/ceph/ceph:v19, name=unruffled_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 09:25:06 compute-0 ceph-mon[73966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Nov 24 09:25:06 compute-0 ceph-mon[73966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3370692579' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 09:25:06 compute-0 ceph-mon[73966]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3370692579' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 09:25:06 compute-0 unruffled_leakey[74080]: 
Nov 24 09:25:06 compute-0 unruffled_leakey[74080]: [global]
Nov 24 09:25:06 compute-0 unruffled_leakey[74080]:         fsid = 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:06 compute-0 unruffled_leakey[74080]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
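`ceph config assimilate-conf` (run in the second one-shot container, unruffled_leakey) ingests a ceph.conf, stores every option it recognizes in the mon's central config database, and prints back the minimal file that must stay on disk; here only fsid and mon_host survive, since those cannot be centralized. A hedged sketch of feeding a conf through it with the standard -i flag; the option in the sample conf is purely illustrative:

    # Hedged sketch: assimilate a conf file; ceph echoes back whatever it
    # could not move into the central config store.
    import subprocess, tempfile

    conf = "[global]\nosd_pool_default_size = 2\n"   # illustrative option
    with tempfile.NamedTemporaryFile("w", suffix=".conf") as f:
        f.write(conf)
        f.flush()
        residue = subprocess.check_output(
            ["ceph", "config", "assimilate-conf", "-i", f.name], text=True)
    print(residue)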
Nov 24 09:25:06 compute-0 systemd[1]: libpod-8b9ac56cd623ffec7e54841504c11fda773731265cd38dffe1262c1a19229eed.scope: Deactivated successfully.
Nov 24 09:25:06 compute-0 podman[74106]: 2025-11-24 09:25:06.40270359 +0000 UTC m=+0.034939804 container died 8b9ac56cd623ffec7e54841504c11fda773731265cd38dffe1262c1a19229eed (image=quay.io/ceph/ceph:v19, name=unruffled_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6b28f82e5e6656dbfbaaf56f01794ad6cc429327f114e03404050b415d28b16-merged.mount: Deactivated successfully.
Nov 24 09:25:06 compute-0 podman[74106]: 2025-11-24 09:25:06.445742392 +0000 UTC m=+0.077978526 container remove 8b9ac56cd623ffec7e54841504c11fda773731265cd38dffe1262c1a19229eed (image=quay.io/ceph/ceph:v19, name=unruffled_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 09:25:06 compute-0 systemd[1]: libpod-conmon-8b9ac56cd623ffec7e54841504c11fda773731265cd38dffe1262c1a19229eed.scope: Deactivated successfully.
Nov 24 09:25:06 compute-0 podman[74121]: 2025-11-24 09:25:06.523016741 +0000 UTC m=+0.047392038 container create 1fcab8c5c424fdc180a9f6159186c63ce806a320b1c3954f3ce389325e051b38 (image=quay.io/ceph/ceph:v19, name=beautiful_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 09:25:06 compute-0 systemd[1]: Started libpod-conmon-1fcab8c5c424fdc180a9f6159186c63ce806a320b1c3954f3ce389325e051b38.scope.
Nov 24 09:25:06 compute-0 ceph-mon[73966]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 09:25:06 compute-0 ceph-mon[73966]: monmap epoch 1
Nov 24 09:25:06 compute-0 ceph-mon[73966]: fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:06 compute-0 ceph-mon[73966]: last_changed 2025-11-24T09:25:03.414609+0000
Nov 24 09:25:06 compute-0 ceph-mon[73966]: created 2025-11-24T09:25:03.414609+0000
Nov 24 09:25:06 compute-0 ceph-mon[73966]: min_mon_release 19 (squid)
Nov 24 09:25:06 compute-0 ceph-mon[73966]: election_strategy: 1
Nov 24 09:25:06 compute-0 ceph-mon[73966]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 24 09:25:06 compute-0 ceph-mon[73966]: fsmap 
Nov 24 09:25:06 compute-0 ceph-mon[73966]: osdmap e1: 0 total, 0 up, 0 in
Nov 24 09:25:06 compute-0 ceph-mon[73966]: mgrmap e1: no daemons active
Nov 24 09:25:06 compute-0 ceph-mon[73966]: from='client.? 192.168.122.100:0/1433431111' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 09:25:06 compute-0 ceph-mon[73966]: from='client.? 192.168.122.100:0/3370692579' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 09:25:06 compute-0 ceph-mon[73966]: from='client.? 192.168.122.100:0/3370692579' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 09:25:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e30d16d8acbf417af719b0845cf679e2a497e6eccb24b25bb0fd002a68d7f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e30d16d8acbf417af719b0845cf679e2a497e6eccb24b25bb0fd002a68d7f8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e30d16d8acbf417af719b0845cf679e2a497e6eccb24b25bb0fd002a68d7f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e30d16d8acbf417af719b0845cf679e2a497e6eccb24b25bb0fd002a68d7f8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:06 compute-0 podman[74121]: 2025-11-24 09:25:06.501443454 +0000 UTC m=+0.025818761 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:06 compute-0 podman[74121]: 2025-11-24 09:25:06.609962158 +0000 UTC m=+0.134337465 container init 1fcab8c5c424fdc180a9f6159186c63ce806a320b1c3954f3ce389325e051b38 (image=quay.io/ceph/ceph:v19, name=beautiful_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 09:25:06 compute-0 podman[74121]: 2025-11-24 09:25:06.618743792 +0000 UTC m=+0.143119119 container start 1fcab8c5c424fdc180a9f6159186c63ce806a320b1c3954f3ce389325e051b38 (image=quay.io/ceph/ceph:v19, name=beautiful_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 09:25:06 compute-0 podman[74121]: 2025-11-24 09:25:06.628946531 +0000 UTC m=+0.153321838 container attach 1fcab8c5c424fdc180a9f6159186c63ce806a320b1c3954f3ce389325e051b38 (image=quay.io/ceph/ceph:v19, name=beautiful_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:06 compute-0 ceph-mon[73966]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:25:06 compute-0 ceph-mon[73966]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2015385508' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:06 compute-0 systemd[1]: libpod-1fcab8c5c424fdc180a9f6159186c63ce806a320b1c3954f3ce389325e051b38.scope: Deactivated successfully.
Nov 24 09:25:06 compute-0 podman[74121]: 2025-11-24 09:25:06.825736723 +0000 UTC m=+0.350112010 container died 1fcab8c5c424fdc180a9f6159186c63ce806a320b1c3954f3ce389325e051b38 (image=quay.io/ceph/ceph:v19, name=beautiful_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Nov 24 09:25:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3e30d16d8acbf417af719b0845cf679e2a497e6eccb24b25bb0fd002a68d7f8-merged.mount: Deactivated successfully.
Nov 24 09:25:06 compute-0 podman[74121]: 2025-11-24 09:25:06.868440516 +0000 UTC m=+0.392815803 container remove 1fcab8c5c424fdc180a9f6159186c63ce806a320b1c3954f3ce389325e051b38 (image=quay.io/ceph/ceph:v19, name=beautiful_elbakyan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:25:06 compute-0 systemd[1]: libpod-conmon-1fcab8c5c424fdc180a9f6159186c63ce806a320b1c3954f3ce389325e051b38.scope: Deactivated successfully.
Nov 24 09:25:06 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:25:07 compute-0 ceph-mon[73966]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 24 09:25:07 compute-0 ceph-mon[73966]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 24 09:25:07 compute-0 ceph-mon[73966]: mon.compute-0@0(leader) e1 shutdown
Nov 24 09:25:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0[73962]: 2025-11-24T09:25:07.102+0000 7f5c14449640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 24 09:25:07 compute-0 ceph-mon[73966]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 09:25:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0[73962]: 2025-11-24T09:25:07.102+0000 7f5c14449640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 24 09:25:07 compute-0 ceph-mon[73966]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 09:25:07 compute-0 podman[74207]: 2025-11-24 09:25:07.183656963 +0000 UTC m=+0.122743382 container died 3f99d71539f1c13063cdee200646932d532e06f8c1ddd6b829d2c13b40b37a92 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 09:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b754d76f02f68a4b713d8995dc3cfdc9529a47473b7ef3ffd977658129abe167-merged.mount: Deactivated successfully.
Nov 24 09:25:07 compute-0 podman[74207]: 2025-11-24 09:25:07.219619292 +0000 UTC m=+0.158705761 container remove 3f99d71539f1c13063cdee200646932d532e06f8c1ddd6b829d2c13b40b37a92 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:07 compute-0 bash[74207]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0
Nov 24 09:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 09:25:07 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@mon.compute-0.service: Deactivated successfully.
Nov 24 09:25:07 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:25:07 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@mon.compute-0.service: Consumed 1.010s CPU time.
Nov 24 09:25:07 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:25:07 compute-0 podman[74310]: 2025-11-24 09:25:07.576058597 +0000 UTC m=+0.032375854 container create 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 09:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b21431eeaaea85f187d9d03ae8c583761979d46092ddb13ed1b184b8f7bdc67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b21431eeaaea85f187d9d03ae8c583761979d46092ddb13ed1b184b8f7bdc67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b21431eeaaea85f187d9d03ae8c583761979d46092ddb13ed1b184b8f7bdc67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b21431eeaaea85f187d9d03ae8c583761979d46092ddb13ed1b184b8f7bdc67/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:07 compute-0 podman[74310]: 2025-11-24 09:25:07.628912178 +0000 UTC m=+0.085229465 container init 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 09:25:07 compute-0 podman[74310]: 2025-11-24 09:25:07.63594477 +0000 UTC m=+0.092262027 container start 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:25:07 compute-0 bash[74310]: 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4
Nov 24 09:25:07 compute-0 podman[74310]: 2025-11-24 09:25:07.561838888 +0000 UTC m=+0.018156165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:07 compute-0 systemd[1]: Started Ceph mon.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:25:07 compute-0 ceph-mon[74331]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:25:07 compute-0 ceph-mon[74331]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 24 09:25:07 compute-0 ceph-mon[74331]: pidfile_write: ignore empty --pid-file
Nov 24 09:25:07 compute-0 ceph-mon[74331]: load: jerasure load: lrc 
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: RocksDB version: 7.9.2
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Git sha 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: DB SUMMARY
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: DB Session ID:  RORHLERH15LC1QL8D0I4
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: CURRENT file:  CURRENT
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58743 ; 
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                         Options.error_if_exists: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                       Options.create_if_missing: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                                     Options.env: 0x55b8766e3c20
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                                Options.info_log: 0x55b877959ac0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                              Options.statistics: (nil)
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                               Options.use_fsync: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                              Options.db_log_dir: 
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                                 Options.wal_dir: 
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                    Options.write_buffer_manager: 0x55b87795d900
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                  Options.unordered_write: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                               Options.row_cache: None
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                              Options.wal_filter: None
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.two_write_queues: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.wal_compression: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.atomic_flush: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.max_background_jobs: 2
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.max_background_compactions: -1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.max_subcompactions: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.max_total_wal_size: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                          Options.max_open_files: -1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:       Options.compaction_readahead_size: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Compression algorithms supported:
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         kZSTD supported: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         kXpressCompression supported: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         kBZip2Compression supported: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         kLZ4Compression supported: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         kZlibCompression supported: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         kSnappyCompression supported: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:           Options.merge_operator: 
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:        Options.compaction_filter: None
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b877958aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b87797d350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:        Options.write_buffer_size: 33554432
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:  Options.max_write_buffer_number: 2
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:          Options.compression: NoCompression
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.num_levels: 7
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 42aa12d2-c531-4ddc-8c4c-bc0b5971346b
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976307674541, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976307678908, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56968, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54485, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976307, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976307679005, "job": 1, "event": "recovery_finished"}
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b87797ee00
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: DB pointer 0x55b877a88000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:25:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   59.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   59.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.79 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.79 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b87797d350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 09:25:07 compute-0 ceph-mon[74331]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0@-1(???) e1 preinit fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0@-1(???).mds e1 new map
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-11-24T09:25:05.540478+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 24 09:25:07 compute-0 ceph-mon[74331]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:25:07 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 09:25:07 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : monmap epoch 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:07 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : last_changed 2025-11-24T09:25:03.414609+0000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : created 2025-11-24T09:25:03.414609+0000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 24 09:25:07 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:25:07 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsmap 
Nov 24 09:25:07 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 24 09:25:07 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 24 09:25:07 compute-0 podman[74332]: 2025-11-24 09:25:07.707840368 +0000 UTC m=+0.039185649 container create f0ec4b446a31de8201bbf55cc741704cad6a693fb5cf38d06db3e6af208c0408 (image=quay.io/ceph/ceph:v19, name=musing_swanson, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:07 compute-0 systemd[1]: Started libpod-conmon-f0ec4b446a31de8201bbf55cc741704cad6a693fb5cf38d06db3e6af208c0408.scope.
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 09:25:07 compute-0 ceph-mon[74331]: monmap epoch 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:07 compute-0 ceph-mon[74331]: last_changed 2025-11-24T09:25:03.414609+0000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: created 2025-11-24T09:25:03.414609+0000
Nov 24 09:25:07 compute-0 ceph-mon[74331]: min_mon_release 19 (squid)
Nov 24 09:25:07 compute-0 ceph-mon[74331]: election_strategy: 1
Nov 24 09:25:07 compute-0 ceph-mon[74331]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 24 09:25:07 compute-0 ceph-mon[74331]: fsmap 
Nov 24 09:25:07 compute-0 ceph-mon[74331]: osdmap e1: 0 total, 0 up, 0 in
Nov 24 09:25:07 compute-0 ceph-mon[74331]: mgrmap e1: no daemons active
Nov 24 09:25:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:07 compute-0 podman[74332]: 2025-11-24 09:25:07.693185509 +0000 UTC m=+0.024530810 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90573808b732ec1f28d4191bfad60bcc3e5bee7bab62c887249bd2b5735af02c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90573808b732ec1f28d4191bfad60bcc3e5bee7bab62c887249bd2b5735af02c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90573808b732ec1f28d4191bfad60bcc3e5bee7bab62c887249bd2b5735af02c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:07 compute-0 podman[74332]: 2025-11-24 09:25:07.801623631 +0000 UTC m=+0.132968912 container init f0ec4b446a31de8201bbf55cc741704cad6a693fb5cf38d06db3e6af208c0408 (image=quay.io/ceph/ceph:v19, name=musing_swanson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 24 09:25:07 compute-0 podman[74332]: 2025-11-24 09:25:07.81058817 +0000 UTC m=+0.141933451 container start f0ec4b446a31de8201bbf55cc741704cad6a693fb5cf38d06db3e6af208c0408 (image=quay.io/ceph/ceph:v19, name=musing_swanson, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 24 09:25:07 compute-0 podman[74332]: 2025-11-24 09:25:07.817383646 +0000 UTC m=+0.148728947 container attach f0ec4b446a31de8201bbf55cc741704cad6a693fb5cf38d06db3e6af208c0408 (image=quay.io/ceph/ceph:v19, name=musing_swanson, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Nov 24 09:25:08 compute-0 systemd[1]: libpod-f0ec4b446a31de8201bbf55cc741704cad6a693fb5cf38d06db3e6af208c0408.scope: Deactivated successfully.
Nov 24 09:25:08 compute-0 podman[74332]: 2025-11-24 09:25:08.028767624 +0000 UTC m=+0.360112905 container died f0ec4b446a31de8201bbf55cc741704cad6a693fb5cf38d06db3e6af208c0408 (image=quay.io/ceph/ceph:v19, name=musing_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:25:08 compute-0 podman[74332]: 2025-11-24 09:25:08.076678025 +0000 UTC m=+0.408023296 container remove f0ec4b446a31de8201bbf55cc741704cad6a693fb5cf38d06db3e6af208c0408 (image=quay.io/ceph/ceph:v19, name=musing_swanson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:08 compute-0 systemd[1]: libpod-conmon-f0ec4b446a31de8201bbf55cc741704cad6a693fb5cf38d06db3e6af208c0408.scope: Deactivated successfully.
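Annotation: the podman lines above trace one throwaway helper container (f0ec4b44, shortened here) through its whole lifecycle in under half a second: create, init, start, attach, died, remove, bracketed by the libpod and libpod-conmon systemd scopes starting and deactivating. cephadm bootstrap runs many such one-shot containers, and the pattern repeats below for hopeful_curran, unruffled_heisenberg, nervous_elion and fervent_lehmann. The same sequence can be watched live via podman's event stream; a minimal sketch follows (the JSON field names ID, Status, Type and Name are assumptions about podman's event format, so verify against your podman version):

    import json
    import subprocess

    # Follow "podman events --format json" (one JSON object per line) and
    # print container lifecycle transitions like the burst logged above.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            print(ev.get("ID", "")[:12], ev.get("Status"), ev.get("Name"))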
Nov 24 09:25:08 compute-0 podman[74422]: 2025-11-24 09:25:08.161907699 +0000 UTC m=+0.058757118 container create 13386794809c44573a21008d83794a708f07193336eb4bde0ed66da29aec2cb4 (image=quay.io/ceph/ceph:v19, name=hopeful_curran, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 09:25:08 compute-0 systemd[1]: Started libpod-conmon-13386794809c44573a21008d83794a708f07193336eb4bde0ed66da29aec2cb4.scope.
Nov 24 09:25:08 compute-0 podman[74422]: 2025-11-24 09:25:08.130584443 +0000 UTC m=+0.027433882 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367a812e0972e2e93c655633a4b3fa2022677c1837c3f4d2d7dcde5f142b6206/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367a812e0972e2e93c655633a4b3fa2022677c1837c3f4d2d7dcde5f142b6206/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367a812e0972e2e93c655633a4b3fa2022677c1837c3f4d2d7dcde5f142b6206/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:08 compute-0 podman[74422]: 2025-11-24 09:25:08.252905083 +0000 UTC m=+0.149754522 container init 13386794809c44573a21008d83794a708f07193336eb4bde0ed66da29aec2cb4 (image=quay.io/ceph/ceph:v19, name=hopeful_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 09:25:08 compute-0 podman[74422]: 2025-11-24 09:25:08.260167781 +0000 UTC m=+0.157017210 container start 13386794809c44573a21008d83794a708f07193336eb4bde0ed66da29aec2cb4 (image=quay.io/ceph/ceph:v19, name=hopeful_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:08 compute-0 podman[74422]: 2025-11-24 09:25:08.264122368 +0000 UTC m=+0.160971787 container attach 13386794809c44573a21008d83794a708f07193336eb4bde0ed66da29aec2cb4 (image=quay.io/ceph/ceph:v19, name=hopeful_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Nov 24 09:25:08 compute-0 systemd[1]: libpod-13386794809c44573a21008d83794a708f07193336eb4bde0ed66da29aec2cb4.scope: Deactivated successfully.
Nov 24 09:25:08 compute-0 podman[74422]: 2025-11-24 09:25:08.484917376 +0000 UTC m=+0.381766795 container died 13386794809c44573a21008d83794a708f07193336eb4bde0ed66da29aec2cb4 (image=quay.io/ceph/ceph:v19, name=hopeful_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 09:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-367a812e0972e2e93c655633a4b3fa2022677c1837c3f4d2d7dcde5f142b6206-merged.mount: Deactivated successfully.
Nov 24 09:25:08 compute-0 podman[74422]: 2025-11-24 09:25:08.523424577 +0000 UTC m=+0.420273976 container remove 13386794809c44573a21008d83794a708f07193336eb4bde0ed66da29aec2cb4 (image=quay.io/ceph/ceph:v19, name=hopeful_curran, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 24 09:25:08 compute-0 systemd[1]: libpod-conmon-13386794809c44573a21008d83794a708f07193336eb4bde0ed66da29aec2cb4.scope: Deactivated successfully.
Nov 24 09:25:08 compute-0 systemd[1]: Reloading.
Nov 24 09:25:08 compute-0 systemd-rc-local-generator[74507]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:25:08 compute-0 systemd-sysv-generator[74510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:25:08 compute-0 systemd[1]: Reloading.
Nov 24 09:25:08 compute-0 systemd-rc-local-generator[74549]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:25:08 compute-0 systemd-sysv-generator[74553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
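Annotation: systemd reloads twice in quick succession here, presumably because cephadm has just installed and refreshed unit files for the mgr service started next; both generator messages are re-emitted verbatim on every reload. The rc.local message is benign: systemd-rc-local-generator only generates rc-local.service when /etc/rc.d/rc.local carries an execute bit, and here it does not. The gate it applies amounts to this check (Python sketch, added for clarity):

    import os

    # systemd-rc-local-generator skips /etc/rc.d/rc.local unless the file
    # exists and is executable, matching the "skipping" message above.
    path = "/etc/rc.d/rc.local"
    print(os.path.exists(path) and os.access(path, os.X_OK))  # False on this host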
Nov 24 09:25:09 compute-0 systemd[1]: Starting Ceph mgr.compute-0.mauvni for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:25:09 compute-0 podman[74606]: 2025-11-24 09:25:09.388754512 +0000 UTC m=+0.060864439 container create df5dc55b63c9ada4d56d709305e136eceedd1e9a85a899e9e20f67ea39dc4670 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:25:09 compute-0 podman[74606]: 2025-11-24 09:25:09.357016887 +0000 UTC m=+0.029126894 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bf2c19c780cdf25c8b7ca680331b8fcbd127749d4374f0c424aadf8139afee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bf2c19c780cdf25c8b7ca680331b8fcbd127749d4374f0c424aadf8139afee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bf2c19c780cdf25c8b7ca680331b8fcbd127749d4374f0c424aadf8139afee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bf2c19c780cdf25c8b7ca680331b8fcbd127749d4374f0c424aadf8139afee/merged/var/lib/ceph/mgr/ceph-compute-0.mauvni supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:09 compute-0 podman[74606]: 2025-11-24 09:25:09.485025706 +0000 UTC m=+0.157135713 container init df5dc55b63c9ada4d56d709305e136eceedd1e9a85a899e9e20f67ea39dc4670 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 24 09:25:09 compute-0 podman[74606]: 2025-11-24 09:25:09.492518398 +0000 UTC m=+0.164628345 container start df5dc55b63c9ada4d56d709305e136eceedd1e9a85a899e9e20f67ea39dc4670 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:25:09 compute-0 bash[74606]: df5dc55b63c9ada4d56d709305e136eceedd1e9a85a899e9e20f67ea39dc4670
Nov 24 09:25:09 compute-0 systemd[1]: Started Ceph mgr.compute-0.mauvni for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:25:09 compute-0 ceph-mgr[74626]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:25:09 compute-0 ceph-mgr[74626]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 09:25:09 compute-0 ceph-mgr[74626]: pidfile_write: ignore empty --pid-file
Nov 24 09:25:09 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'alerts'
Nov 24 09:25:09 compute-0 podman[74627]: 2025-11-24 09:25:09.610602016 +0000 UTC m=+0.057863336 container create 367e24d8cc3becc91b7140f7244e0703bb7b51ffb069bb465e4650e2beedfbb1 (image=quay.io/ceph/ceph:v19, name=unruffled_heisenberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Nov 24 09:25:09 compute-0 systemd[1]: Started libpod-conmon-367e24d8cc3becc91b7140f7244e0703bb7b51ffb069bb465e4650e2beedfbb1.scope.
Nov 24 09:25:09 compute-0 podman[74627]: 2025-11-24 09:25:09.590229887 +0000 UTC m=+0.037491257 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466d32c1eb20b825c539b66dd6161b29aa53c6ec1cd5db7a063693accbe13657/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466d32c1eb20b825c539b66dd6161b29aa53c6ec1cd5db7a063693accbe13657/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466d32c1eb20b825c539b66dd6161b29aa53c6ec1cd5db7a063693accbe13657/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:09 compute-0 ceph-mgr[74626]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:25:09 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'balancer'
Nov 24 09:25:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:09.713+0000 7fcebfb6e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:25:09 compute-0 podman[74627]: 2025-11-24 09:25:09.729497173 +0000 UTC m=+0.176758583 container init 367e24d8cc3becc91b7140f7244e0703bb7b51ffb069bb465e4650e2beedfbb1 (image=quay.io/ceph/ceph:v19, name=unruffled_heisenberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Nov 24 09:25:09 compute-0 podman[74627]: 2025-11-24 09:25:09.745583436 +0000 UTC m=+0.192844756 container start 367e24d8cc3becc91b7140f7244e0703bb7b51ffb069bb465e4650e2beedfbb1 (image=quay.io/ceph/ceph:v19, name=unruffled_heisenberg, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:09 compute-0 podman[74627]: 2025-11-24 09:25:09.749090752 +0000 UTC m=+0.196352102 container attach 367e24d8cc3becc91b7140f7244e0703bb7b51ffb069bb465e4650e2beedfbb1 (image=quay.io/ceph/ceph:v19, name=unruffled_heisenberg, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:09 compute-0 ceph-mgr[74626]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:25:09 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'cephadm'
Nov 24 09:25:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:09.795+0000 7fcebfb6e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
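Annotation: the "Module <name> has missing NOTIFY_TYPES member" warnings that recur through the rest of this section come from the mgr's Python module loader, which probes each module for a NOTIFY_TYPES attribute declaring the cluster notifications it consumes; the warning is noisy but harmless, and every bundled module lacking the attribute triggers it once at load time. A hedged sketch of the attribute being probed for, assuming the MgrModule and NotifyType names from Ceph's mgr_module bindings (importable only inside ceph-mgr):

    # Sketch of a ceph-mgr module that declares NOTIFY_TYPES; declaring it
    # silences the loader warning seen in this log for modules that omit it.
    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # Only the listed notification types are delivered to notify().
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            # React to monmap/osdmap changes here.
            pass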
Nov 24 09:25:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 24 09:25:09 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3022763435' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]: 
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]: {
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "health": {
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "status": "HEALTH_OK",
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "checks": {},
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "mutes": []
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     },
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "election_epoch": 5,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "quorum": [
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         0
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     ],
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "quorum_names": [
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "compute-0"
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     ],
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "quorum_age": 2,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "monmap": {
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "epoch": 1,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "min_mon_release_name": "squid",
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "num_mons": 1
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     },
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "osdmap": {
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "epoch": 1,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "num_osds": 0,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "num_up_osds": 0,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "osd_up_since": 0,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "num_in_osds": 0,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "osd_in_since": 0,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "num_remapped_pgs": 0
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     },
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "pgmap": {
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "pgs_by_state": [],
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "num_pgs": 0,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "num_pools": 0,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "num_objects": 0,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "data_bytes": 0,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "bytes_used": 0,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "bytes_avail": 0,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "bytes_total": 0
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     },
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "fsmap": {
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "epoch": 1,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "btime": "2025-11-24T09:25:05.540478+0000",
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "by_rank": [],
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "up:standby": 0
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     },
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "mgrmap": {
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "available": false,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "num_standbys": 0,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "modules": [
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:             "iostat",
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:             "nfs",
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:             "restful"
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         ],
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "services": {}
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     },
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "servicemap": {
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "epoch": 1,
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "modified": "2025-11-24T09:25:05.542309+0000",
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:         "services": {}
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     },
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]:     "progress_events": {}
Nov 24 09:25:09 compute-0 unruffled_heisenberg[74663]: }
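Annotation: the indented block above is "ceph status --format json-pretty" output captured from the helper container's stdout; the matching mon_command dispatch appears in the audit log just before it. Whatever is polling (cephadm bootstrap, by the look of it) repeats the call below with only quorum_age advancing and mgrmap.available still false, since the mgr is still loading modules at this point. A minimal consumer of the fields shown, assuming the block has been saved to status.json:

    import json

    # Extract the bootstrap-relevant fields from a captured "ceph status"
    # JSON document like the one logged above.
    with open("status.json") as f:
        status = json.load(f)

    print(status["health"]["status"])     # "HEALTH_OK"
    print(status["quorum_names"])         # ["compute-0"]
    print(status["mgrmap"]["available"])  # False until the mgr finishes loading
    print(status["osdmap"]["num_osds"])   # 0 this early in bootstrap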
Nov 24 09:25:09 compute-0 systemd[1]: libpod-367e24d8cc3becc91b7140f7244e0703bb7b51ffb069bb465e4650e2beedfbb1.scope: Deactivated successfully.
Nov 24 09:25:10 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3022763435' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 09:25:10 compute-0 podman[74689]: 2025-11-24 09:25:10.021406779 +0000 UTC m=+0.033777227 container died 367e24d8cc3becc91b7140f7244e0703bb7b51ffb069bb465e4650e2beedfbb1 (image=quay.io/ceph/ceph:v19, name=unruffled_heisenberg, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-466d32c1eb20b825c539b66dd6161b29aa53c6ec1cd5db7a063693accbe13657-merged.mount: Deactivated successfully.
Nov 24 09:25:10 compute-0 podman[74689]: 2025-11-24 09:25:10.070404197 +0000 UTC m=+0.082774565 container remove 367e24d8cc3becc91b7140f7244e0703bb7b51ffb069bb465e4650e2beedfbb1 (image=quay.io/ceph/ceph:v19, name=unruffled_heisenberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 09:25:10 compute-0 systemd[1]: libpod-conmon-367e24d8cc3becc91b7140f7244e0703bb7b51ffb069bb465e4650e2beedfbb1.scope: Deactivated successfully.
Nov 24 09:25:10 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'crash'
Nov 24 09:25:10 compute-0 ceph-mgr[74626]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:25:10 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'dashboard'
Nov 24 09:25:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:10.617+0000 7fcebfb6e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:25:11 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'devicehealth'
Nov 24 09:25:11 compute-0 ceph-mgr[74626]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:25:11 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 09:25:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:11.274+0000 7fcebfb6e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:25:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 09:25:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 09:25:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   from numpy import show_config as show_numpy_config
Nov 24 09:25:11 compute-0 ceph-mgr[74626]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:25:11 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'influx'
Nov 24 09:25:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:11.441+0000 7fcebfb6e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:25:11 compute-0 ceph-mgr[74626]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:25:11 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'insights'
Nov 24 09:25:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:11.512+0000 7fcebfb6e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:25:11 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'iostat'
Nov 24 09:25:11 compute-0 ceph-mgr[74626]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:25:11 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'k8sevents'
Nov 24 09:25:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:11.648+0000 7fcebfb6e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:25:12 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'localpool'
Nov 24 09:25:12 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 09:25:12 compute-0 podman[74715]: 2025-11-24 09:25:12.169669728 +0000 UTC m=+0.056786349 container create 0506c6bcbd836ef8be750bc89563cac6c4bc97f1f90b79cd873db003625ade57 (image=quay.io/ceph/ceph:v19, name=nervous_elion, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:12 compute-0 systemd[1]: Started libpod-conmon-0506c6bcbd836ef8be750bc89563cac6c4bc97f1f90b79cd873db003625ade57.scope.
Nov 24 09:25:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:12 compute-0 podman[74715]: 2025-11-24 09:25:12.139423209 +0000 UTC m=+0.026539820 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab59cd73f7d46d0f2b912018feaff03fc4705d7fe8a88b55204a853db01803a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab59cd73f7d46d0f2b912018feaff03fc4705d7fe8a88b55204a853db01803a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab59cd73f7d46d0f2b912018feaff03fc4705d7fe8a88b55204a853db01803a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:12 compute-0 podman[74715]: 2025-11-24 09:25:12.260963651 +0000 UTC m=+0.148080262 container init 0506c6bcbd836ef8be750bc89563cac6c4bc97f1f90b79cd873db003625ade57 (image=quay.io/ceph/ceph:v19, name=nervous_elion, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:12 compute-0 podman[74715]: 2025-11-24 09:25:12.26667371 +0000 UTC m=+0.153790301 container start 0506c6bcbd836ef8be750bc89563cac6c4bc97f1f90b79cd873db003625ade57 (image=quay.io/ceph/ceph:v19, name=nervous_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:12 compute-0 podman[74715]: 2025-11-24 09:25:12.27117072 +0000 UTC m=+0.158287321 container attach 0506c6bcbd836ef8be750bc89563cac6c4bc97f1f90b79cd873db003625ade57 (image=quay.io/ceph/ceph:v19, name=nervous_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 09:25:12 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'mirroring'
Nov 24 09:25:12 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'nfs'
Nov 24 09:25:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 24 09:25:12 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1977727937' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 09:25:12 compute-0 nervous_elion[74731]: 
Nov 24 09:25:12 compute-0 nervous_elion[74731]: {
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "health": {
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "status": "HEALTH_OK",
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "checks": {},
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "mutes": []
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     },
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "election_epoch": 5,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "quorum": [
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         0
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     ],
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "quorum_names": [
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "compute-0"
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     ],
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "quorum_age": 4,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "monmap": {
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "epoch": 1,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "min_mon_release_name": "squid",
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "num_mons": 1
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     },
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "osdmap": {
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "epoch": 1,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "num_osds": 0,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "num_up_osds": 0,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "osd_up_since": 0,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "num_in_osds": 0,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "osd_in_since": 0,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "num_remapped_pgs": 0
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     },
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "pgmap": {
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "pgs_by_state": [],
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "num_pgs": 0,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "num_pools": 0,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "num_objects": 0,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "data_bytes": 0,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "bytes_used": 0,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "bytes_avail": 0,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "bytes_total": 0
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     },
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "fsmap": {
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "epoch": 1,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "btime": "2025-11-24T09:25:05.540478+0000",
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "by_rank": [],
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "up:standby": 0
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     },
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "mgrmap": {
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "available": false,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "num_standbys": 0,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "modules": [
Nov 24 09:25:12 compute-0 nervous_elion[74731]:             "iostat",
Nov 24 09:25:12 compute-0 nervous_elion[74731]:             "nfs",
Nov 24 09:25:12 compute-0 nervous_elion[74731]:             "restful"
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         ],
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "services": {}
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     },
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "servicemap": {
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "epoch": 1,
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "modified": "2025-11-24T09:25:05.542309+0000",
Nov 24 09:25:12 compute-0 nervous_elion[74731]:         "services": {}
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     },
Nov 24 09:25:12 compute-0 nervous_elion[74731]:     "progress_events": {}
Nov 24 09:25:12 compute-0 nervous_elion[74731]: }
Nov 24 09:25:12 compute-0 systemd[1]: libpod-0506c6bcbd836ef8be750bc89563cac6c4bc97f1f90b79cd873db003625ade57.scope: Deactivated successfully.
Nov 24 09:25:12 compute-0 podman[74715]: 2025-11-24 09:25:12.477391501 +0000 UTC m=+0.364508092 container died 0506c6bcbd836ef8be750bc89563cac6c4bc97f1f90b79cd873db003625ade57 (image=quay.io/ceph/ceph:v19, name=nervous_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-aab59cd73f7d46d0f2b912018feaff03fc4705d7fe8a88b55204a853db01803a-merged.mount: Deactivated successfully.
Nov 24 09:25:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1977727937' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 09:25:12 compute-0 podman[74715]: 2025-11-24 09:25:12.515512153 +0000 UTC m=+0.402628734 container remove 0506c6bcbd836ef8be750bc89563cac6c4bc97f1f90b79cd873db003625ade57 (image=quay.io/ceph/ceph:v19, name=nervous_elion, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:12 compute-0 systemd[1]: libpod-conmon-0506c6bcbd836ef8be750bc89563cac6c4bc97f1f90b79cd873db003625ade57.scope: Deactivated successfully.
Nov 24 09:25:12 compute-0 ceph-mgr[74626]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:25:12 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'orchestrator'
Nov 24 09:25:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:12.683+0000 7fcebfb6e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:25:12 compute-0 ceph-mgr[74626]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:25:12 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 09:25:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:12.912+0000 7fcebfb6e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:25:12 compute-0 ceph-mgr[74626]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:25:12 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'osd_support'
Nov 24 09:25:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:12.997+0000 7fcebfb6e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:25:13 compute-0 ceph-mgr[74626]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:25:13 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 09:25:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:13.065+0000 7fcebfb6e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:25:13 compute-0 ceph-mgr[74626]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:25:13 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'progress'
Nov 24 09:25:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:13.145+0000 7fcebfb6e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:25:13 compute-0 ceph-mgr[74626]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:25:13 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'prometheus'
Nov 24 09:25:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:13.217+0000 7fcebfb6e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:25:13 compute-0 ceph-mgr[74626]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:25:13 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rbd_support'
Nov 24 09:25:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:13.587+0000 7fcebfb6e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:25:13 compute-0 ceph-mgr[74626]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:25:13 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'restful'
Nov 24 09:25:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:13.685+0000 7fcebfb6e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:25:13 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rgw'
Nov 24 09:25:14 compute-0 ceph-mgr[74626]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:25:14 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rook'
Nov 24 09:25:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:14.147+0000 7fcebfb6e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:25:14 compute-0 podman[74769]: 2025-11-24 09:25:14.576498199 +0000 UTC m=+0.040919311 container create 41c6fdb52e3217f2a256074119d5c5fb2a7ca76bdbe5035ed919848bb7d8e1f3 (image=quay.io/ceph/ceph:v19, name=fervent_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:14 compute-0 systemd[1]: Started libpod-conmon-41c6fdb52e3217f2a256074119d5c5fb2a7ca76bdbe5035ed919848bb7d8e1f3.scope.
Nov 24 09:25:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153a524b1eecf306b2c76a1b46a034251677e0b6ea31dc387255d6ab1f50cca9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153a524b1eecf306b2c76a1b46a034251677e0b6ea31dc387255d6ab1f50cca9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/153a524b1eecf306b2c76a1b46a034251677e0b6ea31dc387255d6ab1f50cca9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:14 compute-0 podman[74769]: 2025-11-24 09:25:14.558210912 +0000 UTC m=+0.022632034 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:14 compute-0 podman[74769]: 2025-11-24 09:25:14.664994503 +0000 UTC m=+0.129415635 container init 41c6fdb52e3217f2a256074119d5c5fb2a7ca76bdbe5035ed919848bb7d8e1f3 (image=quay.io/ceph/ceph:v19, name=fervent_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:14 compute-0 podman[74769]: 2025-11-24 09:25:14.676055293 +0000 UTC m=+0.140476415 container start 41c6fdb52e3217f2a256074119d5c5fb2a7ca76bdbe5035ed919848bb7d8e1f3 (image=quay.io/ceph/ceph:v19, name=fervent_lehmann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 24 09:25:14 compute-0 podman[74769]: 2025-11-24 09:25:14.679531138 +0000 UTC m=+0.143952270 container attach 41c6fdb52e3217f2a256074119d5c5fb2a7ca76bdbe5035ed919848bb7d8e1f3 (image=quay.io/ceph/ceph:v19, name=fervent_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:14 compute-0 ceph-mgr[74626]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:25:14 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'selftest'
Nov 24 09:25:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:14.761+0000 7fcebfb6e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:25:14 compute-0 ceph-mgr[74626]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:25:14 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'snap_schedule'
Nov 24 09:25:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:14.837+0000 7fcebfb6e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:25:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 24 09:25:14 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2714161988' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]: 
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]: {
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "health": {
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "status": "HEALTH_OK",
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "checks": {},
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "mutes": []
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     },
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "election_epoch": 5,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "quorum": [
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         0
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     ],
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "quorum_names": [
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "compute-0"
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     ],
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "quorum_age": 7,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "monmap": {
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "epoch": 1,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "min_mon_release_name": "squid",
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "num_mons": 1
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     },
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "osdmap": {
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "epoch": 1,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "num_osds": 0,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "num_up_osds": 0,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "osd_up_since": 0,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "num_in_osds": 0,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "osd_in_since": 0,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "num_remapped_pgs": 0
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     },
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "pgmap": {
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "pgs_by_state": [],
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "num_pgs": 0,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "num_pools": 0,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "num_objects": 0,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "data_bytes": 0,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "bytes_used": 0,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "bytes_avail": 0,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "bytes_total": 0
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     },
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "fsmap": {
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "epoch": 1,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "btime": "2025-11-24T09:25:05.540478+0000",
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "by_rank": [],
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "up:standby": 0
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     },
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "mgrmap": {
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "available": false,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "num_standbys": 0,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "modules": [
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:             "iostat",
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:             "nfs",
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:             "restful"
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         ],
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "services": {}
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     },
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "servicemap": {
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "epoch": 1,
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "modified": "2025-11-24T09:25:05.542309+0000",
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:         "services": {}
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     },
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]:     "progress_events": {}
Nov 24 09:25:14 compute-0 fervent_lehmann[74786]: }
Nov 24 09:25:14 compute-0 systemd[1]: libpod-41c6fdb52e3217f2a256074119d5c5fb2a7ca76bdbe5035ed919848bb7d8e1f3.scope: Deactivated successfully.
Nov 24 09:25:14 compute-0 podman[74769]: 2025-11-24 09:25:14.903075594 +0000 UTC m=+0.367496726 container died 41c6fdb52e3217f2a256074119d5c5fb2a7ca76bdbe5035ed919848bb7d8e1f3 (image=quay.io/ceph/ceph:v19, name=fervent_lehmann, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:25:14 compute-0 ceph-mgr[74626]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:25:14 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'stats'
Nov 24 09:25:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:14.922+0000 7fcebfb6e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-153a524b1eecf306b2c76a1b46a034251677e0b6ea31dc387255d6ab1f50cca9-merged.mount: Deactivated successfully.
Nov 24 09:25:14 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2714161988' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 09:25:14 compute-0 podman[74769]: 2025-11-24 09:25:14.945863249 +0000 UTC m=+0.410284361 container remove 41c6fdb52e3217f2a256074119d5c5fb2a7ca76bdbe5035ed919848bb7d8e1f3 (image=quay.io/ceph/ceph:v19, name=fervent_lehmann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:25:14 compute-0 systemd[1]: libpod-conmon-41c6fdb52e3217f2a256074119d5c5fb2a7ca76bdbe5035ed919848bb7d8e1f3.scope: Deactivated successfully.
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'status'
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'telegraf'
Nov 24 09:25:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:15.077+0000 7fcebfb6e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'telemetry'
Nov 24 09:25:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:15.147+0000 7fcebfb6e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 09:25:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:15.315+0000 7fcebfb6e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'volumes'
Nov 24 09:25:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:15.556+0000 7fcebfb6e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'zabbix'
Nov 24 09:25:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:15.836+0000 7fcebfb6e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:25:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:15.904+0000 7fcebfb6e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: ms_deliver_dispatch: unhandled message 0x5616d7cd29c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mauvni
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr handle_mgr_map Activating!
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.mauvni(active, starting, since 0.0117178s)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr handle_mgr_map I am now activating
Nov 24 09:25:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:25:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e1 all = 1
Nov 24 09:25:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:25:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 09:25:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:25:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"} v 0)
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: balancer
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: crash
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Manager daemon compute-0.mauvni is now available
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [balancer INFO root] Starting
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: devicehealth
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Starting
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:25:15
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [balancer INFO root] No pools available
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: iostat
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: nfs
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: orchestrator
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: pg_autoscaler
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: progress
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [progress INFO root] Loading...
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [progress INFO root] No stored events to load
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [progress INFO root] Loaded [] historic events
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [progress INFO root] Loaded OSDMap, ready.
Nov 24 09:25:15 compute-0 ceph-mon[74331]: Activating manager daemon compute-0.mauvni
Nov 24 09:25:15 compute-0 ceph-mon[74331]: mgrmap e2: compute-0.mauvni(active, starting, since 0.0117178s)
Nov 24 09:25:15 compute-0 ceph-mon[74331]: from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:25:15 compute-0 ceph-mon[74331]: from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:25:15 compute-0 ceph-mon[74331]: from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 09:25:15 compute-0 ceph-mon[74331]: from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:25:15 compute-0 ceph-mon[74331]: from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:25:15 compute-0 ceph-mon[74331]: Manager daemon compute-0.mauvni is now available
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [rbd_support INFO root] recovery thread starting
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [rbd_support INFO root] starting setup
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: rbd_support
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: restful
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [restful INFO root] server_addr: :: server_port: 8003
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [restful WARNING root] server not running: no certificate configured
Nov 24 09:25:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"} v 0)
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: status
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: telemetry
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 24 09:25:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [rbd_support INFO root] PerfHandler: starting
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TaskHandler: starting
Nov 24 09:25:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"} v 0)
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: [rbd_support INFO root] setup complete
Nov 24 09:25:15 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: volumes
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Nov 24 09:25:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:16 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.mauvni(active, since 1.02499s)
Nov 24 09:25:16 compute-0 ceph-mon[74331]: from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:25:16 compute-0 ceph-mon[74331]: from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:25:16 compute-0 ceph-mon[74331]: from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:16 compute-0 ceph-mon[74331]: from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:16 compute-0 ceph-mon[74331]: from='mgr.14102 192.168.122.100:0/1472554930' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:16 compute-0 ceph-mon[74331]: mgrmap e3: compute-0.mauvni(active, since 1.02499s)
Nov 24 09:25:17 compute-0 podman[74904]: 2025-11-24 09:25:17.026398883 +0000 UTC m=+0.051636533 container create 030ff72599ef41eb7a5b9a2f6d599cf319c247e4e47ccabf54fe45d3fe18bd11 (image=quay.io/ceph/ceph:v19, name=strange_mclaren, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 24 09:25:17 compute-0 systemd[1]: Started libpod-conmon-030ff72599ef41eb7a5b9a2f6d599cf319c247e4e47ccabf54fe45d3fe18bd11.scope.
Nov 24 09:25:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deae2edca48250419f24937488e265189af7316f5ab2166291c846b5d4e2b86c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deae2edca48250419f24937488e265189af7316f5ab2166291c846b5d4e2b86c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deae2edca48250419f24937488e265189af7316f5ab2166291c846b5d4e2b86c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:17 compute-0 podman[74904]: 2025-11-24 09:25:17.09333266 +0000 UTC m=+0.118570290 container init 030ff72599ef41eb7a5b9a2f6d599cf319c247e4e47ccabf54fe45d3fe18bd11 (image=quay.io/ceph/ceph:v19, name=strange_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:25:17 compute-0 podman[74904]: 2025-11-24 09:25:17.003961165 +0000 UTC m=+0.029198825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:17 compute-0 podman[74904]: 2025-11-24 09:25:17.10439621 +0000 UTC m=+0.129633830 container start 030ff72599ef41eb7a5b9a2f6d599cf319c247e4e47ccabf54fe45d3fe18bd11 (image=quay.io/ceph/ceph:v19, name=strange_mclaren, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:25:17 compute-0 podman[74904]: 2025-11-24 09:25:17.108257605 +0000 UTC m=+0.133495225 container attach 030ff72599ef41eb7a5b9a2f6d599cf319c247e4e47ccabf54fe45d3fe18bd11 (image=quay.io/ceph/ceph:v19, name=strange_mclaren, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 24 09:25:17 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470414884' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 09:25:17 compute-0 strange_mclaren[74920]: 
Nov 24 09:25:17 compute-0 strange_mclaren[74920]: {
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "health": {
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "status": "HEALTH_OK",
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "checks": {},
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "mutes": []
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     },
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "election_epoch": 5,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "quorum": [
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         0
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     ],
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "quorum_names": [
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "compute-0"
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     ],
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "quorum_age": 9,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "monmap": {
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "epoch": 1,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "min_mon_release_name": "squid",
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "num_mons": 1
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     },
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "osdmap": {
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "epoch": 1,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "num_osds": 0,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "num_up_osds": 0,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "osd_up_since": 0,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "num_in_osds": 0,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "osd_in_since": 0,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "num_remapped_pgs": 0
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     },
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "pgmap": {
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "pgs_by_state": [],
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "num_pgs": 0,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "num_pools": 0,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "num_objects": 0,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "data_bytes": 0,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "bytes_used": 0,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "bytes_avail": 0,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "bytes_total": 0
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     },
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "fsmap": {
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "epoch": 1,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "btime": "2025-11-24T09:25:05.540478+0000",
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "by_rank": [],
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "up:standby": 0
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     },
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "mgrmap": {
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "available": true,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "num_standbys": 0,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "modules": [
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:             "iostat",
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:             "nfs",
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:             "restful"
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         ],
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "services": {}
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     },
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "servicemap": {
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "epoch": 1,
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "modified": "2025-11-24T09:25:05.542309+0000",
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:         "services": {}
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     },
Nov 24 09:25:17 compute-0 strange_mclaren[74920]:     "progress_events": {}
Nov 24 09:25:17 compute-0 strange_mclaren[74920]: }
Nov 24 09:25:17 compute-0 systemd[1]: libpod-030ff72599ef41eb7a5b9a2f6d599cf319c247e4e47ccabf54fe45d3fe18bd11.scope: Deactivated successfully.
Nov 24 09:25:17 compute-0 podman[74904]: 2025-11-24 09:25:17.529880323 +0000 UTC m=+0.555117953 container died 030ff72599ef41eb7a5b9a2f6d599cf319c247e4e47ccabf54fe45d3fe18bd11 (image=quay.io/ceph/ceph:v19, name=strange_mclaren, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:25:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-deae2edca48250419f24937488e265189af7316f5ab2166291c846b5d4e2b86c-merged.mount: Deactivated successfully.
Nov 24 09:25:17 compute-0 podman[74904]: 2025-11-24 09:25:17.565140724 +0000 UTC m=+0.590378334 container remove 030ff72599ef41eb7a5b9a2f6d599cf319c247e4e47ccabf54fe45d3fe18bd11 (image=quay.io/ceph/ceph:v19, name=strange_mclaren, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 09:25:17 compute-0 systemd[1]: libpod-conmon-030ff72599ef41eb7a5b9a2f6d599cf319c247e4e47ccabf54fe45d3fe18bd11.scope: Deactivated successfully.
Nov 24 09:25:17 compute-0 podman[74958]: 2025-11-24 09:25:17.618175871 +0000 UTC m=+0.034817072 container create c74230e2a8d122de70546337faa29ddbdb896c19a54870c69a9f9a9667bda09b (image=quay.io/ceph/ceph:v19, name=zen_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 09:25:17 compute-0 systemd[1]: Started libpod-conmon-c74230e2a8d122de70546337faa29ddbdb896c19a54870c69a9f9a9667bda09b.scope.
Nov 24 09:25:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48edab97ca2a34cedebfb1979b58ab8c9ee89812621035cb8af618e79aeab726/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48edab97ca2a34cedebfb1979b58ab8c9ee89812621035cb8af618e79aeab726/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48edab97ca2a34cedebfb1979b58ab8c9ee89812621035cb8af618e79aeab726/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48edab97ca2a34cedebfb1979b58ab8c9ee89812621035cb8af618e79aeab726/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:17 compute-0 podman[74958]: 2025-11-24 09:25:17.683638331 +0000 UTC m=+0.100279532 container init c74230e2a8d122de70546337faa29ddbdb896c19a54870c69a9f9a9667bda09b (image=quay.io/ceph/ceph:v19, name=zen_morse, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:25:17 compute-0 podman[74958]: 2025-11-24 09:25:17.689512055 +0000 UTC m=+0.106153256 container start c74230e2a8d122de70546337faa29ddbdb896c19a54870c69a9f9a9667bda09b (image=quay.io/ceph/ceph:v19, name=zen_morse, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 09:25:17 compute-0 podman[74958]: 2025-11-24 09:25:17.69258076 +0000 UTC m=+0.109221961 container attach c74230e2a8d122de70546337faa29ddbdb896c19a54870c69a9f9a9667bda09b (image=quay.io/ceph/ceph:v19, name=zen_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 24 09:25:17 compute-0 podman[74958]: 2025-11-24 09:25:17.60300741 +0000 UTC m=+0.019648631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:17 compute-0 ceph-mgr[74626]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 09:25:17 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.mauvni(active, since 2s)
Nov 24 09:25:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/470414884' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 09:25:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Nov 24 09:25:18 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/417032496' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 09:25:18 compute-0 zen_morse[74975]: 
Nov 24 09:25:18 compute-0 zen_morse[74975]: [global]
Nov 24 09:25:18 compute-0 zen_morse[74975]:         fsid = 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:18 compute-0 zen_morse[74975]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 24 09:25:18 compute-0 systemd[1]: libpod-c74230e2a8d122de70546337faa29ddbdb896c19a54870c69a9f9a9667bda09b.scope: Deactivated successfully.
Nov 24 09:25:18 compute-0 podman[74958]: 2025-11-24 09:25:18.035517224 +0000 UTC m=+0.452158425 container died c74230e2a8d122de70546337faa29ddbdb896c19a54870c69a9f9a9667bda09b (image=quay.io/ceph/ceph:v19, name=zen_morse, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-48edab97ca2a34cedebfb1979b58ab8c9ee89812621035cb8af618e79aeab726-merged.mount: Deactivated successfully.
Nov 24 09:25:18 compute-0 podman[74958]: 2025-11-24 09:25:18.070162771 +0000 UTC m=+0.486803982 container remove c74230e2a8d122de70546337faa29ddbdb896c19a54870c69a9f9a9667bda09b (image=quay.io/ceph/ceph:v19, name=zen_morse, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:25:18 compute-0 systemd[1]: libpod-conmon-c74230e2a8d122de70546337faa29ddbdb896c19a54870c69a9f9a9667bda09b.scope: Deactivated successfully.
Nov 24 09:25:18 compute-0 podman[75013]: 2025-11-24 09:25:18.137709692 +0000 UTC m=+0.043499634 container create 796b7a93b82f0e6203617177644d62d5f17415b2379f95c588e7a4a62fc264ba (image=quay.io/ceph/ceph:v19, name=elegant_archimedes, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:18 compute-0 systemd[1]: Started libpod-conmon-796b7a93b82f0e6203617177644d62d5f17415b2379f95c588e7a4a62fc264ba.scope.
Nov 24 09:25:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81045d23c2e99a6698fefa560e856a1d53a44364870597bb267b3c9869a5d72a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81045d23c2e99a6698fefa560e856a1d53a44364870597bb267b3c9869a5d72a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81045d23c2e99a6698fefa560e856a1d53a44364870597bb267b3c9869a5d72a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:18 compute-0 podman[75013]: 2025-11-24 09:25:18.1204439 +0000 UTC m=+0.026233872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:18 compute-0 podman[75013]: 2025-11-24 09:25:18.216463687 +0000 UTC m=+0.122253649 container init 796b7a93b82f0e6203617177644d62d5f17415b2379f95c588e7a4a62fc264ba (image=quay.io/ceph/ceph:v19, name=elegant_archimedes, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 09:25:18 compute-0 podman[75013]: 2025-11-24 09:25:18.222517175 +0000 UTC m=+0.128307137 container start 796b7a93b82f0e6203617177644d62d5f17415b2379f95c588e7a4a62fc264ba (image=quay.io/ceph/ceph:v19, name=elegant_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:18 compute-0 podman[75013]: 2025-11-24 09:25:18.225568241 +0000 UTC m=+0.131358203 container attach 796b7a93b82f0e6203617177644d62d5f17415b2379f95c588e7a4a62fc264ba (image=quay.io/ceph/ceph:v19, name=elegant_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Nov 24 09:25:18 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993287807' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 24 09:25:18 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2993287807' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 24 09:25:18 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.mauvni(active, since 3s)
Nov 24 09:25:18 compute-0 ceph-mon[74331]: mgrmap e4: compute-0.mauvni(active, since 2s)
Nov 24 09:25:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/417032496' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 09:25:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2993287807' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 24 09:25:18 compute-0 systemd[1]: libpod-796b7a93b82f0e6203617177644d62d5f17415b2379f95c588e7a4a62fc264ba.scope: Deactivated successfully.
Nov 24 09:25:18 compute-0 conmon[75029]: conmon 796b7a93b82f0e620361 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-796b7a93b82f0e6203617177644d62d5f17415b2379f95c588e7a4a62fc264ba.scope/container/memory.events
Nov 24 09:25:18 compute-0 podman[75013]: 2025-11-24 09:25:18.990489821 +0000 UTC m=+0.896279773 container died 796b7a93b82f0e6203617177644d62d5f17415b2379f95c588e7a4a62fc264ba (image=quay.io/ceph/ceph:v19, name=elegant_archimedes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-81045d23c2e99a6698fefa560e856a1d53a44364870597bb267b3c9869a5d72a-merged.mount: Deactivated successfully.
Nov 24 09:25:19 compute-0 podman[75013]: 2025-11-24 09:25:19.028266174 +0000 UTC m=+0.934056116 container remove 796b7a93b82f0e6203617177644d62d5f17415b2379f95c588e7a4a62fc264ba (image=quay.io/ceph/ceph:v19, name=elegant_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:19 compute-0 systemd[1]: libpod-conmon-796b7a93b82f0e6203617177644d62d5f17415b2379f95c588e7a4a62fc264ba.scope: Deactivated successfully.
Nov 24 09:25:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ignoring --setuser ceph since I am not root
Nov 24 09:25:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ignoring --setgroup ceph since I am not root
Nov 24 09:25:19 compute-0 ceph-mgr[74626]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 09:25:19 compute-0 ceph-mgr[74626]: pidfile_write: ignore empty --pid-file
Nov 24 09:25:19 compute-0 podman[75068]: 2025-11-24 09:25:19.096905342 +0000 UTC m=+0.045903253 container create 65c62bbb06161b254734ce97a83632bfc20ceb7fedc9d76855b99c62663e12b2 (image=quay.io/ceph/ceph:v19, name=stupefied_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:25:19 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'alerts'
Nov 24 09:25:19 compute-0 systemd[1]: Started libpod-conmon-65c62bbb06161b254734ce97a83632bfc20ceb7fedc9d76855b99c62663e12b2.scope.
Nov 24 09:25:19 compute-0 podman[75068]: 2025-11-24 09:25:19.077891447 +0000 UTC m=+0.026889378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539b0a8a02c36a75d3be6af6b806fe1f44f2b7e7f83550bac0c5dc45438b9ced/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539b0a8a02c36a75d3be6af6b806fe1f44f2b7e7f83550bac0c5dc45438b9ced/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539b0a8a02c36a75d3be6af6b806fe1f44f2b7e7f83550bac0c5dc45438b9ced/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:19 compute-0 podman[75068]: 2025-11-24 09:25:19.190755737 +0000 UTC m=+0.139753658 container init 65c62bbb06161b254734ce97a83632bfc20ceb7fedc9d76855b99c62663e12b2 (image=quay.io/ceph/ceph:v19, name=stupefied_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:19 compute-0 podman[75068]: 2025-11-24 09:25:19.199853729 +0000 UTC m=+0.148851630 container start 65c62bbb06161b254734ce97a83632bfc20ceb7fedc9d76855b99c62663e12b2 (image=quay.io/ceph/ceph:v19, name=stupefied_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:25:19 compute-0 ceph-mgr[74626]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:25:19 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'balancer'
Nov 24 09:25:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:19.201+0000 7fa4acd8d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:25:19 compute-0 podman[75068]: 2025-11-24 09:25:19.203090368 +0000 UTC m=+0.152088259 container attach 65c62bbb06161b254734ce97a83632bfc20ceb7fedc9d76855b99c62663e12b2 (image=quay.io/ceph/ceph:v19, name=stupefied_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:19 compute-0 ceph-mgr[74626]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:25:19 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'cephadm'
Nov 24 09:25:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:19.309+0000 7fa4acd8d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:25:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 24 09:25:19 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3762001347' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 09:25:19 compute-0 stupefied_chandrasekhar[75104]: {
Nov 24 09:25:19 compute-0 stupefied_chandrasekhar[75104]:     "epoch": 5,
Nov 24 09:25:19 compute-0 stupefied_chandrasekhar[75104]:     "available": true,
Nov 24 09:25:19 compute-0 stupefied_chandrasekhar[75104]:     "active_name": "compute-0.mauvni",
Nov 24 09:25:19 compute-0 stupefied_chandrasekhar[75104]:     "num_standby": 0
Nov 24 09:25:19 compute-0 stupefied_chandrasekhar[75104]: }
Nov 24 09:25:19 compute-0 systemd[1]: libpod-65c62bbb06161b254734ce97a83632bfc20ceb7fedc9d76855b99c62663e12b2.scope: Deactivated successfully.
Nov 24 09:25:19 compute-0 podman[75068]: 2025-11-24 09:25:19.645784691 +0000 UTC m=+0.594782622 container died 65c62bbb06161b254734ce97a83632bfc20ceb7fedc9d76855b99c62663e12b2 (image=quay.io/ceph/ceph:v19, name=stupefied_chandrasekhar, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 24 09:25:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-539b0a8a02c36a75d3be6af6b806fe1f44f2b7e7f83550bac0c5dc45438b9ced-merged.mount: Deactivated successfully.
Nov 24 09:25:19 compute-0 podman[75068]: 2025-11-24 09:25:19.698766746 +0000 UTC m=+0.647764647 container remove 65c62bbb06161b254734ce97a83632bfc20ceb7fedc9d76855b99c62663e12b2 (image=quay.io/ceph/ceph:v19, name=stupefied_chandrasekhar, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:19 compute-0 systemd[1]: libpod-conmon-65c62bbb06161b254734ce97a83632bfc20ceb7fedc9d76855b99c62663e12b2.scope: Deactivated successfully.
Nov 24 09:25:19 compute-0 podman[75146]: 2025-11-24 09:25:19.754355405 +0000 UTC m=+0.036309479 container create eed3d43a8b2c59103916e92034773a12920857f55ef900e6df1cda5b1e75d8a7 (image=quay.io/ceph/ceph:v19, name=gracious_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:25:19 compute-0 systemd[1]: Started libpod-conmon-eed3d43a8b2c59103916e92034773a12920857f55ef900e6df1cda5b1e75d8a7.scope.
Nov 24 09:25:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3514c96fd1dfa6e4752d7b57bbfeff52432187c167eaf3a167ffc9d5d35ce092/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3514c96fd1dfa6e4752d7b57bbfeff52432187c167eaf3a167ffc9d5d35ce092/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3514c96fd1dfa6e4752d7b57bbfeff52432187c167eaf3a167ffc9d5d35ce092/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:19 compute-0 podman[75146]: 2025-11-24 09:25:19.813386059 +0000 UTC m=+0.095340123 container init eed3d43a8b2c59103916e92034773a12920857f55ef900e6df1cda5b1e75d8a7 (image=quay.io/ceph/ceph:v19, name=gracious_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 09:25:19 compute-0 podman[75146]: 2025-11-24 09:25:19.817909989 +0000 UTC m=+0.099864053 container start eed3d43a8b2c59103916e92034773a12920857f55ef900e6df1cda5b1e75d8a7 (image=quay.io/ceph/ceph:v19, name=gracious_pascal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:19 compute-0 podman[75146]: 2025-11-24 09:25:19.82079318 +0000 UTC m=+0.102747254 container attach eed3d43a8b2c59103916e92034773a12920857f55ef900e6df1cda5b1e75d8a7 (image=quay.io/ceph/ceph:v19, name=gracious_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:25:19 compute-0 podman[75146]: 2025-11-24 09:25:19.737227316 +0000 UTC m=+0.019181400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
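[annotation] The create/init/start/attach/pull sequence above is cephadm's usual pattern: each management command runs in a fresh, short-lived quay.io/ceph/ceph:v19 container that bind-mounts the host's ceph.conf and admin keyring (hence the xfs remount notices). A minimal sketch of the same pattern run by hand; the exact podman flags are an assumption, only the image and the mount paths are taken from the log:

    # Hypothetical reproduction of the one-shot container pattern seen above.
    # Assumes podman is installed; the mounts mirror the paths in the xfs
    # remount notices. This is not the literal command cephadm issued.
    import subprocess

    result = subprocess.run(
        [
            "podman", "run", "--rm",
            "-v", "/etc/ceph/ceph.conf:/etc/ceph/ceph.conf:z",
            "-v", "/etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:z",
            "quay.io/ceph/ceph:v19",
            "ceph", "status",
        ],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout or result.stderr)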
Nov 24 09:25:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2993287807' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 24 09:25:19 compute-0 ceph-mon[74331]: mgrmap e5: compute-0.mauvni(active, since 3s)
Nov 24 09:25:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3762001347' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
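[annotation] The audit entries above record the cephadm mgr module being enabled and an immediate mgr stat poll. The same two calls from the command line, wrapped in Python; the command prefixes are verbatim from the cmd=[...] payloads, the output field names are indicative only:

    # The two commands from the audit trail above, issued via the ceph CLI.
    import json, subprocess

    subprocess.run(["ceph", "mgr", "module", "enable", "cephadm"], check=True)
    # "ceph mgr stat" reports the active mgr; field names are indicative.
    stat = json.loads(subprocess.check_output(["ceph", "mgr", "stat", "--format", "json"], text=True))
    print(stat.get("active_name"), stat.get("available"))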
Nov 24 09:25:20 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'crash'
Nov 24 09:25:20 compute-0 ceph-mgr[74626]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:25:20 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'dashboard'
Nov 24 09:25:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:20.137+0000 7fa4acd8d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:25:20 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'devicehealth'
Nov 24 09:25:20 compute-0 ceph-mgr[74626]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:25:20 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 09:25:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:20.772+0000 7fa4acd8d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:25:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 09:25:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 09:25:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   from numpy import show_config as show_numpy_config
Nov 24 09:25:20 compute-0 ceph-mgr[74626]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:25:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:20.947+0000 7fa4acd8d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:25:20 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'influx'
Nov 24 09:25:21 compute-0 ceph-mgr[74626]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:25:21 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'insights'
Nov 24 09:25:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:21.021+0000 7fa4acd8d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:25:21 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'iostat'
Nov 24 09:25:21 compute-0 ceph-mgr[74626]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:25:21 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'k8sevents'
Nov 24 09:25:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:21.159+0000 7fa4acd8d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:25:21 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'localpool'
Nov 24 09:25:21 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 09:25:21 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'mirroring'
Nov 24 09:25:21 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'nfs'
Nov 24 09:25:22 compute-0 ceph-mgr[74626]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:25:22 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'orchestrator'
Nov 24 09:25:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:22.203+0000 7fa4acd8d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:25:22 compute-0 ceph-mgr[74626]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:25:22 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 09:25:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:22.442+0000 7fa4acd8d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:25:22 compute-0 ceph-mgr[74626]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:25:22 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'osd_support'
Nov 24 09:25:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:22.516+0000 7fa4acd8d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:25:22 compute-0 ceph-mgr[74626]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:25:22 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 09:25:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:22.583+0000 7fa4acd8d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:25:22 compute-0 ceph-mgr[74626]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:25:22 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'progress'
Nov 24 09:25:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:22.671+0000 7fa4acd8d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:25:22 compute-0 ceph-mgr[74626]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:25:22 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'prometheus'
Nov 24 09:25:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:22.753+0000 7fa4acd8d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:25:23 compute-0 ceph-mgr[74626]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:25:23 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rbd_support'
Nov 24 09:25:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:23.128+0000 7fa4acd8d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:25:23 compute-0 ceph-mgr[74626]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:25:23 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'restful'
Nov 24 09:25:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:23.234+0000 7fa4acd8d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:25:23 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rgw'
Nov 24 09:25:23 compute-0 ceph-mgr[74626]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:25:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:23.686+0000 7fa4acd8d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:25:23 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rook'
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'selftest'
Nov 24 09:25:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:24.308+0000 7fa4acd8d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'snap_schedule'
Nov 24 09:25:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:24.382+0000 7fa4acd8d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'stats'
Nov 24 09:25:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:24.471+0000 7fa4acd8d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'status'
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'telegraf'
Nov 24 09:25:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:24.627+0000 7fa4acd8d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'telemetry'
Nov 24 09:25:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:24.700+0000 7fa4acd8d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:25:24 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 09:25:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:24.871+0000 7fa4acd8d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'volumes'
Nov 24 09:25:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:25.111+0000 7fa4acd8d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'zabbix'
Nov 24 09:25:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:25.403+0000 7fa4acd8d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:25:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:25:25.474+0000 7fa4acd8d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
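[annotation] Every "missing NOTIFY_TYPES member" line above is the mgr noting that a bundled Python module does not declare the optional NOTIFY_TYPES attribute listing which cluster notifications it consumes; the modules still load. A sketch of the attribute being checked for, assuming the squid-era mgr_module API; this imports only inside ceph-mgr's embedded interpreter, not as a standalone script:

    # Illustrative mgr module declaring NOTIFY_TYPES (assumed API; loads only
    # inside ceph-mgr, where the framework provides mgr_module).
    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Without this attribute the mgr logs "has missing NOTIFY_TYPES member".
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.info("received %s notification", notify_type)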
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mauvni restarted
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: ms_deliver_dispatch: unhandled message 0x55e84abe6d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mauvni
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr handle_mgr_map Activating!
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr handle_mgr_map I am now activating
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.mauvni(active, starting, since 0.0173431s)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"} v 0)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e1 all = 1
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
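[annotation] On activation the mgr inventories the cluster with the metadata commands audited above. Replaying them by hand; the prefixes are verbatim from the audit lines, the helper and field access are illustrative:

    # Re-issue the mgr's metadata queries via the CLI (illustrative helper).
    import json, subprocess

    def ceph_json(*args):
        return json.loads(subprocess.check_output(["ceph", *args, "--format", "json"], text=True))

    print(ceph_json("mon", "metadata", "compute-0").get("ceph_version"))
    print(ceph_json("osd", "metadata"))  # [] while the osdmap is "0 total, 0 up, 0 in"
    print(ceph_json("mds", "metadata"))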
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: balancer
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [balancer INFO root] Starting
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Manager daemon compute-0.mauvni is now available
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:25:25
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [balancer INFO root] No pools available
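[annotation] The balancer lines above show its defaults on a fresh cluster: mode upmap, max misplaced 0.05, and an immediate no-op because no pools exist yet. Inspecting and tuning those values; the option name target_max_misplaced_ratio is an assumption about which setting backs the logged "max misplaced" figure:

    # Read the balancer state logged above and (optionally) retune the
    # misplaced ratio; 0.05 restates the default visible in the log.
    import json, subprocess

    status = json.loads(subprocess.check_output(["ceph", "balancer", "status", "--format", "json"], text=True))
    print(status.get("mode"))  # expected: "upmap"
    subprocess.run(["ceph", "config", "set", "mgr", "target_max_misplaced_ratio", "0.05"], check=True)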
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: cephadm
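[annotation] First-boot bookkeeping: cephadm finds no migration_current marker and pins it to the latest schema migration, then stores its config-check defaults (the two config-set/config-key-set commands above). The stored marker can be read back; the key name is verbatim from the mon_command line:

    # Read back the migration marker cephadm just wrote.
    import subprocess

    out = subprocess.check_output(["ceph", "config", "get", "mgr", "mgr/cephadm/migration_current"], text=True)
    print(out.strip())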
Nov 24 09:25:25 compute-0 ceph-mon[74331]: Active manager daemon compute-0.mauvni restarted
Nov 24 09:25:25 compute-0 ceph-mon[74331]: Activating manager daemon compute-0.mauvni
Nov 24 09:25:25 compute-0 ceph-mon[74331]: osdmap e2: 0 total, 0 up, 0 in
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mgrmap e6: compute-0.mauvni(active, starting, since 0.0173431s)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mon[74331]: Manager daemon compute-0.mauvni is now available
Nov 24 09:25:25 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: crash
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: devicehealth
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: iostat
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Starting
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: nfs
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: orchestrator
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: pg_autoscaler
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: progress
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [progress INFO root] Loading...
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [progress INFO root] No stored events to load
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [progress INFO root] Loaded [] historic events
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [progress INFO root] Loaded OSDMap, ready.
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] recovery thread starting
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] starting setup
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: rbd_support
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: restful
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [restful INFO root] server_addr: :: server_port: 8003
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: status
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"} v 0)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: telemetry
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [restful WARNING root] server not running: no certificate configured
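[annotation] The restful module constructed fine but refuses to serve without a certificate, which is why only server_addr/server_port were logged before the warning. Per the restful module's own CLI it can mint a self-signed certificate; the port value below just restates the logged default:

    # Give restful a self-signed certificate so it can start serving;
    # 8003 matches the server_port logged above.
    import subprocess

    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)
    subprocess.run(["ceph", "config", "set", "mgr", "mgr/restful/server_port", "8003"], check=True)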
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] PerfHandler: starting
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TaskHandler: starting
Nov 24 09:25:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"} v 0)
Nov 24 09:25:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] setup complete
Nov 24 09:25:25 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: volumes
Nov 24 09:25:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Nov 24 09:25:26 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Nov 24 09:25:26 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:26 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 24 09:25:26 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.mauvni(active, since 1.03159s)
Nov 24 09:25:26 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 24 09:25:26 compute-0 gracious_pascal[75168]: {
Nov 24 09:25:26 compute-0 gracious_pascal[75168]:     "mgrmap_epoch": 7,
Nov 24 09:25:26 compute-0 gracious_pascal[75168]:     "initialized": true
Nov 24 09:25:26 compute-0 gracious_pascal[75168]: }
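[annotation] The JSON printed by the gracious_pascal container is the reply to the mgr_status dispatch audited just above (client.14126): the bootstrap loop is waiting for the new active mgr to report itself initialized on the current mgrmap epoch. A trivial sketch of that readiness test over the captured payload:

    # Parse the mgr_status payload shown above and apply the readiness test
    # the bootstrap is effectively making.
    import json

    status = json.loads('{"mgrmap_epoch": 7, "initialized": true}')
    ready = status["initialized"] and status["mgrmap_epoch"] >= 7
    print("mgr ready:", ready)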
Nov 24 09:25:26 compute-0 ceph-mon[74331]: Found migration_current of "None". Setting to last migration.
Nov 24 09:25:26 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:26 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:25:26 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:25:26 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:25:26 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:25:26 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:26 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:26 compute-0 ceph-mon[74331]: mgrmap e7: compute-0.mauvni(active, since 1.03159s)
Nov 24 09:25:26 compute-0 systemd[1]: libpod-eed3d43a8b2c59103916e92034773a12920857f55ef900e6df1cda5b1e75d8a7.scope: Deactivated successfully.
Nov 24 09:25:26 compute-0 podman[75146]: 2025-11-24 09:25:26.565765969 +0000 UTC m=+6.847720033 container died eed3d43a8b2c59103916e92034773a12920857f55ef900e6df1cda5b1e75d8a7 (image=quay.io/ceph/ceph:v19, name=gracious_pascal, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 24 09:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3514c96fd1dfa6e4752d7b57bbfeff52432187c167eaf3a167ffc9d5d35ce092-merged.mount: Deactivated successfully.
Nov 24 09:25:26 compute-0 podman[75146]: 2025-11-24 09:25:26.608245951 +0000 UTC m=+6.890200015 container remove eed3d43a8b2c59103916e92034773a12920857f55ef900e6df1cda5b1e75d8a7 (image=quay.io/ceph/ceph:v19, name=gracious_pascal, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 09:25:26 compute-0 systemd[1]: libpod-conmon-eed3d43a8b2c59103916e92034773a12920857f55ef900e6df1cda5b1e75d8a7.scope: Deactivated successfully.
Nov 24 09:25:26 compute-0 podman[75316]: 2025-11-24 09:25:26.677876974 +0000 UTC m=+0.048584043 container create f406d2f2aa4b859caa58ae3e79502b284db1f9fa0805c2311376cbb70ad04cf3 (image=quay.io/ceph/ceph:v19, name=sharp_elgamal, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:26 compute-0 systemd[1]: Started libpod-conmon-f406d2f2aa4b859caa58ae3e79502b284db1f9fa0805c2311376cbb70ad04cf3.scope.
Nov 24 09:25:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca6fbac674edd6dde59fc57d27bbeb1b2b26f9371a9d301e73bf7e751cdf7a7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca6fbac674edd6dde59fc57d27bbeb1b2b26f9371a9d301e73bf7e751cdf7a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca6fbac674edd6dde59fc57d27bbeb1b2b26f9371a9d301e73bf7e751cdf7a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:26 compute-0 podman[75316]: 2025-11-24 09:25:26.655324996 +0000 UTC m=+0.026032115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:26 compute-0 podman[75316]: 2025-11-24 09:25:26.764423936 +0000 UTC m=+0.135131005 container init f406d2f2aa4b859caa58ae3e79502b284db1f9fa0805c2311376cbb70ad04cf3 (image=quay.io/ceph/ceph:v19, name=sharp_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:25:26 compute-0 podman[75316]: 2025-11-24 09:25:26.770917448 +0000 UTC m=+0.141624477 container start f406d2f2aa4b859caa58ae3e79502b284db1f9fa0805c2311376cbb70ad04cf3 (image=quay.io/ceph/ceph:v19, name=sharp_elgamal, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:26 compute-0 podman[75316]: 2025-11-24 09:25:26.782131865 +0000 UTC m=+0.152838944 container attach f406d2f2aa4b859caa58ae3e79502b284db1f9fa0805c2311376cbb70ad04cf3 (image=quay.io/ceph/ceph:v19, name=sharp_elgamal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Nov 24 09:25:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:25:27] ENGINE Bus STARTING
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:25:27] ENGINE Bus STARTING
Nov 24 09:25:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 24 09:25:27 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:25:27 compute-0 systemd[1]: libpod-f406d2f2aa4b859caa58ae3e79502b284db1f9fa0805c2311376cbb70ad04cf3.scope: Deactivated successfully.
Nov 24 09:25:27 compute-0 podman[75357]: 2025-11-24 09:25:27.275706722 +0000 UTC m=+0.035679853 container died f406d2f2aa4b859caa58ae3e79502b284db1f9fa0805c2311376cbb70ad04cf3 (image=quay.io/ceph/ceph:v19, name=sharp_elgamal, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-bca6fbac674edd6dde59fc57d27bbeb1b2b26f9371a9d301e73bf7e751cdf7a7-merged.mount: Deactivated successfully.
Nov 24 09:25:27 compute-0 podman[75357]: 2025-11-24 09:25:27.310079554 +0000 UTC m=+0.070052665 container remove f406d2f2aa4b859caa58ae3e79502b284db1f9fa0805c2311376cbb70ad04cf3 (image=quay.io/ceph/ceph:v19, name=sharp_elgamal, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:25:27 compute-0 systemd[1]: libpod-conmon-f406d2f2aa4b859caa58ae3e79502b284db1f9fa0805c2311376cbb70ad04cf3.scope: Deactivated successfully.
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:25:27] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:25:27] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:25:27] ENGINE Client ('192.168.122.100', 58872) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:25:27] ENGINE Client ('192.168.122.100', 58872) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:25:27 compute-0 podman[75381]: 2025-11-24 09:25:27.388011872 +0000 UTC m=+0.046759848 container create 44d0404743c5891cff8acfc2733b728d096f1e08e2f207661005db803da0e730 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:27 compute-0 systemd[1]: Started libpod-conmon-44d0404743c5891cff8acfc2733b728d096f1e08e2f207661005db803da0e730.scope.
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:25:27] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:25:27] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:25:27] ENGINE Bus STARTED
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:25:27] ENGINE Bus STARTED
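[annotation] With the cephadm module loaded, the orchestrator backend is pointed at it (the "orch set backend" dispatch above), and cephadm's embedded CherryPy bus brings up an HTTPS endpoint on 7150 and an HTTP endpoint on 8765; the single TLS EOF line is just a probe dropping the handshake. Replaying and verifying the backend selection:

    # The backend selection from the audit line, plus a verification call;
    # "ceph orch status" should report Backend: cephadm once this lands.
    import subprocess

    subprocess.run(["ceph", "orch", "set", "backend", "cephadm"], check=True)
    subprocess.run(["ceph", "orch", "status"], check=True)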
Nov 24 09:25:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 24 09:25:27 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:25:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13795b91769ec8ebe5d229f65bff80aa35ddda3e31e9bdb317c11764fc616310/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13795b91769ec8ebe5d229f65bff80aa35ddda3e31e9bdb317c11764fc616310/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13795b91769ec8ebe5d229f65bff80aa35ddda3e31e9bdb317c11764fc616310/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:27 compute-0 podman[75381]: 2025-11-24 09:25:27.370058649 +0000 UTC m=+0.028806635 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:27 compute-0 podman[75381]: 2025-11-24 09:25:27.464585608 +0000 UTC m=+0.123333574 container init 44d0404743c5891cff8acfc2733b728d096f1e08e2f207661005db803da0e730 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:27 compute-0 podman[75381]: 2025-11-24 09:25:27.474910504 +0000 UTC m=+0.133658470 container start 44d0404743c5891cff8acfc2733b728d096f1e08e2f207661005db803da0e730 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:27 compute-0 podman[75381]: 2025-11-24 09:25:27.478253107 +0000 UTC m=+0.137001123 container attach 44d0404743c5891cff8acfc2733b728d096f1e08e2f207661005db803da0e730 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 09:25:27 compute-0 ceph-mon[74331]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 24 09:25:27 compute-0 ceph-mon[74331]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 24 09:25:27 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:27 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:25:27 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:25:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019928838 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Nov 24 09:25:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: [cephadm INFO root] Set ssh ssh_user
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 24 09:25:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Nov 24 09:25:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: [cephadm INFO root] Set ssh ssh_config
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 24 09:25:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 24 09:25:27 compute-0 vibrant_lamport[75408]: ssh user set to ceph-admin. sudo will be used
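[annotation] cephadm set-user switches host management from root to ceph-admin, stored under mgr/cephadm/ssh_user; as the log states, commands on managed hosts will then be wrapped in sudo, so ceph-admin needs passwordless sudo on every host. The call itself, verbatim from the audit line:

    # Switch cephadm's ssh user (requires passwordless sudo for ceph-admin
    # on every managed host).
    import subprocess

    subprocess.run(["ceph", "cephadm", "set-user", "ceph-admin"], check=True)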
Nov 24 09:25:27 compute-0 systemd[1]: libpod-44d0404743c5891cff8acfc2733b728d096f1e08e2f207661005db803da0e730.scope: Deactivated successfully.
Nov 24 09:25:27 compute-0 podman[75381]: 2025-11-24 09:25:27.883059167 +0000 UTC m=+0.541807133 container died 44d0404743c5891cff8acfc2733b728d096f1e08e2f207661005db803da0e730 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-13795b91769ec8ebe5d229f65bff80aa35ddda3e31e9bdb317c11764fc616310-merged.mount: Deactivated successfully.
Nov 24 09:25:27 compute-0 podman[75381]: 2025-11-24 09:25:27.930487291 +0000 UTC m=+0.589235267 container remove 44d0404743c5891cff8acfc2733b728d096f1e08e2f207661005db803da0e730 (image=quay.io/ceph/ceph:v19, name=vibrant_lamport, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Nov 24 09:25:27 compute-0 systemd[1]: libpod-conmon-44d0404743c5891cff8acfc2733b728d096f1e08e2f207661005db803da0e730.scope: Deactivated successfully.
Nov 24 09:25:27 compute-0 podman[75445]: 2025-11-24 09:25:27.996091075 +0000 UTC m=+0.042494114 container create 6506c259e5582be95a88ba0f58a9380359a72fe97c96be87004191b417c5b239 (image=quay.io/ceph/ceph:v19, name=exciting_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:25:28 compute-0 systemd[1]: Started libpod-conmon-6506c259e5582be95a88ba0f58a9380359a72fe97c96be87004191b417c5b239.scope.
Nov 24 09:25:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/359ebc2a9a3ac27597130f0788780eddf912306fb1296b4811ca7c6c17b6e511/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/359ebc2a9a3ac27597130f0788780eddf912306fb1296b4811ca7c6c17b6e511/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/359ebc2a9a3ac27597130f0788780eddf912306fb1296b4811ca7c6c17b6e511/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/359ebc2a9a3ac27597130f0788780eddf912306fb1296b4811ca7c6c17b6e511/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/359ebc2a9a3ac27597130f0788780eddf912306fb1296b4811ca7c6c17b6e511/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:28 compute-0 podman[75445]: 2025-11-24 09:25:28.066289873 +0000 UTC m=+0.112692942 container init 6506c259e5582be95a88ba0f58a9380359a72fe97c96be87004191b417c5b239 (image=quay.io/ceph/ceph:v19, name=exciting_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:28 compute-0 podman[75445]: 2025-11-24 09:25:27.976889389 +0000 UTC m=+0.023292468 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:28 compute-0 podman[75445]: 2025-11-24 09:25:28.075412618 +0000 UTC m=+0.121815697 container start 6506c259e5582be95a88ba0f58a9380359a72fe97c96be87004191b417c5b239 (image=quay.io/ceph/ceph:v19, name=exciting_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:28 compute-0 podman[75445]: 2025-11-24 09:25:28.079280274 +0000 UTC m=+0.125683363 container attach 6506c259e5582be95a88ba0f58a9380359a72fe97c96be87004191b417c5b239 (image=quay.io/ceph/ceph:v19, name=exciting_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:25:28 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.mauvni(active, since 2s)
Nov 24 09:25:28 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Nov 24 09:25:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:28 compute-0 ceph-mgr[74626]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 24 09:25:28 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 24 09:25:28 compute-0 ceph-mgr[74626]: [cephadm INFO root] Set ssh private key
Nov 24 09:25:28 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Set ssh private key
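[annotation] The set-priv-key dispatch above stores the cluster's ssh identity under mgr/cephadm/ssh_identity_key; the matching public half is normally loaded the same way, and the next container (angry_kalam) can be seen bind-mounting both /tmp/cephadm-ssh-key files below. A sketch with illustrative file paths:

    # Load the ssh identity pair; the -i file paths are illustrative, the
    # subcommand names follow the cephadm CLI.
    import subprocess

    subprocess.run(["ceph", "cephadm", "set-priv-key", "-i", "/tmp/cephadm-ssh-key"], check=True)
    subprocess.run(["ceph", "cephadm", "set-pub-key", "-i", "/tmp/cephadm-ssh-key.pub"], check=True)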
Nov 24 09:25:28 compute-0 systemd[1]: libpod-6506c259e5582be95a88ba0f58a9380359a72fe97c96be87004191b417c5b239.scope: Deactivated successfully.
Nov 24 09:25:28 compute-0 podman[75445]: 2025-11-24 09:25:28.471871672 +0000 UTC m=+0.518274741 container died 6506c259e5582be95a88ba0f58a9380359a72fe97c96be87004191b417c5b239 (image=quay.io/ceph/ceph:v19, name=exciting_goodall, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-359ebc2a9a3ac27597130f0788780eddf912306fb1296b4811ca7c6c17b6e511-merged.mount: Deactivated successfully.
Nov 24 09:25:28 compute-0 podman[75445]: 2025-11-24 09:25:28.52225741 +0000 UTC m=+0.568660449 container remove 6506c259e5582be95a88ba0f58a9380359a72fe97c96be87004191b417c5b239 (image=quay.io/ceph/ceph:v19, name=exciting_goodall, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:28 compute-0 systemd[1]: libpod-conmon-6506c259e5582be95a88ba0f58a9380359a72fe97c96be87004191b417c5b239.scope: Deactivated successfully.
Nov 24 09:25:28 compute-0 podman[75501]: 2025-11-24 09:25:28.593032301 +0000 UTC m=+0.047416555 container create 6e63e3fbc6d1382fafd7d963295776f3772429267916927ed0fe3ba9c374ef07 (image=quay.io/ceph/ceph:v19, name=angry_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 09:25:28 compute-0 systemd[1]: Started libpod-conmon-6e63e3fbc6d1382fafd7d963295776f3772429267916927ed0fe3ba9c374ef07.scope.
Nov 24 09:25:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:28 compute-0 podman[75501]: 2025-11-24 09:25:28.573085668 +0000 UTC m=+0.027469742 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9410f9490ba2f781a42202695c0e0e74bdd7b99247c2ed5f45008961d14cad19/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9410f9490ba2f781a42202695c0e0e74bdd7b99247c2ed5f45008961d14cad19/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9410f9490ba2f781a42202695c0e0e74bdd7b99247c2ed5f45008961d14cad19/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9410f9490ba2f781a42202695c0e0e74bdd7b99247c2ed5f45008961d14cad19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9410f9490ba2f781a42202695c0e0e74bdd7b99247c2ed5f45008961d14cad19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:28 compute-0 podman[75501]: 2025-11-24 09:25:28.702880551 +0000 UTC m=+0.157264585 container init 6e63e3fbc6d1382fafd7d963295776f3772429267916927ed0fe3ba9c374ef07 (image=quay.io/ceph/ceph:v19, name=angry_kalam, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:28 compute-0 podman[75501]: 2025-11-24 09:25:28.717756648 +0000 UTC m=+0.172140682 container start 6e63e3fbc6d1382fafd7d963295776f3772429267916927ed0fe3ba9c374ef07 (image=quay.io/ceph/ceph:v19, name=angry_kalam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:25:28 compute-0 podman[75501]: 2025-11-24 09:25:28.721828489 +0000 UTC m=+0.176212523 container attach 6e63e3fbc6d1382fafd7d963295776f3772429267916927ed0fe3ba9c374ef07 (image=quay.io/ceph/ceph:v19, name=angry_kalam, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:25:28 compute-0 ceph-mon[74331]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:28 compute-0 ceph-mon[74331]: [24/Nov/2025:09:25:27] ENGINE Bus STARTING
Nov 24 09:25:28 compute-0 ceph-mon[74331]: [24/Nov/2025:09:25:27] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:25:28 compute-0 ceph-mon[74331]: [24/Nov/2025:09:25:27] ENGINE Client ('192.168.122.100', 58872) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:25:28 compute-0 ceph-mon[74331]: [24/Nov/2025:09:25:27] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:25:28 compute-0 ceph-mon[74331]: [24/Nov/2025:09:25:27] ENGINE Bus STARTED
Nov 24 09:25:28 compute-0 ceph-mon[74331]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:28 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:28 compute-0 ceph-mon[74331]: Set ssh ssh_user
Nov 24 09:25:28 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:28 compute-0 ceph-mon[74331]: Set ssh ssh_config
Nov 24 09:25:28 compute-0 ceph-mon[74331]: ssh user set to ceph-admin. sudo will be used
Nov 24 09:25:28 compute-0 ceph-mon[74331]: mgrmap e8: compute-0.mauvni(active, since 2s)
Nov 24 09:25:28 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:29 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Nov 24 09:25:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:29 compute-0 ceph-mgr[74626]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 24 09:25:29 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 24 09:25:29 compute-0 systemd[1]: libpod-6e63e3fbc6d1382fafd7d963295776f3772429267916927ed0fe3ba9c374ef07.scope: Deactivated successfully.
Nov 24 09:25:29 compute-0 podman[75501]: 2025-11-24 09:25:29.092858454 +0000 UTC m=+0.547242528 container died 6e63e3fbc6d1382fafd7d963295776f3772429267916927ed0fe3ba9c374ef07 (image=quay.io/ceph/ceph:v19, name=angry_kalam, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9410f9490ba2f781a42202695c0e0e74bdd7b99247c2ed5f45008961d14cad19-merged.mount: Deactivated successfully.
Nov 24 09:25:29 compute-0 podman[75501]: 2025-11-24 09:25:29.145331183 +0000 UTC m=+0.599715217 container remove 6e63e3fbc6d1382fafd7d963295776f3772429267916927ed0fe3ba9c374ef07 (image=quay.io/ceph/ceph:v19, name=angry_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:29 compute-0 systemd[1]: libpod-conmon-6e63e3fbc6d1382fafd7d963295776f3772429267916927ed0fe3ba9c374ef07.scope: Deactivated successfully.
Nov 24 09:25:29 compute-0 podman[75556]: 2025-11-24 09:25:29.231855314 +0000 UTC m=+0.055802021 container create 4a3e966286618efe41564fa8986ac7f3068c711781a59966afda25fc82905962 (image=quay.io/ceph/ceph:v19, name=vigorous_hofstadter, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 09:25:29 compute-0 systemd[1]: Started libpod-conmon-4a3e966286618efe41564fa8986ac7f3068c711781a59966afda25fc82905962.scope.
Nov 24 09:25:29 compute-0 podman[75556]: 2025-11-24 09:25:29.207791258 +0000 UTC m=+0.031737965 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0188f3fc2a95e2b9817f493dc984121b3f9198c054ded156b03382e7e429d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0188f3fc2a95e2b9817f493dc984121b3f9198c054ded156b03382e7e429d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0188f3fc2a95e2b9817f493dc984121b3f9198c054ded156b03382e7e429d3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:29 compute-0 podman[75556]: 2025-11-24 09:25:29.32582123 +0000 UTC m=+0.149768007 container init 4a3e966286618efe41564fa8986ac7f3068c711781a59966afda25fc82905962 (image=quay.io/ceph/ceph:v19, name=vigorous_hofstadter, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:29 compute-0 podman[75556]: 2025-11-24 09:25:29.331173762 +0000 UTC m=+0.155120499 container start 4a3e966286618efe41564fa8986ac7f3068c711781a59966afda25fc82905962 (image=quay.io/ceph/ceph:v19, name=vigorous_hofstadter, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:29 compute-0 podman[75556]: 2025-11-24 09:25:29.335360506 +0000 UTC m=+0.159307203 container attach 4a3e966286618efe41564fa8986ac7f3068c711781a59966afda25fc82905962 (image=quay.io/ceph/ceph:v19, name=vigorous_hofstadter, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:29 compute-0 ceph-mgr[74626]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 09:25:29 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:29 compute-0 vigorous_hofstadter[75573]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLNHnHp2ic5G45qoIN1y3ZAaqUzqhds9Wj7H+uiMU9HkoMhNARnYHTlV3y+nIVSHqWO/YGHjPAE2DFG4Er42NVJVhVg5sAMxGZjfgELHfCx7bmzXoREFbS0nhRqG4tqTYKzu+rjxLnuIFxWQUIApAQqU6QWWORfGRZSrSswVO0TkaCeJAQ3n9r/qkGNF3Xu5vhz7QxqxXpT7XFq7S8ZDaQSWialXkqesa0bQO8KN5KwNZAKj5KxhnHZCCcHZw4m6aXsnseKTttfSdI1qW94bHeCtZWxVPXlIhnpV+IOOxg7LqYI5PWFWowNVCVVd58wveZQWXZIf29zJZv/VOChWMbd44KaQX0SNir9MgI480+A7PGGs0Q9CjgCPNE7ZpHfSX34gBToVmU4CxaRVCL8/duVYmqinQk012pNBEtHkOL90OdmVuJZ4Y/R2ooH5Ql/Jy4RnqigIYGVhTuB0CLcNzFK9sCSHLY8u9/e0BOBxQyjNHHQxJpuaZ2u9dSiNIQu5k= zuul@controller
Nov 24 09:25:29 compute-0 systemd[1]: libpod-4a3e966286618efe41564fa8986ac7f3068c711781a59966afda25fc82905962.scope: Deactivated successfully.
Nov 24 09:25:29 compute-0 podman[75556]: 2025-11-24 09:25:29.761172906 +0000 UTC m=+0.585119633 container died 4a3e966286618efe41564fa8986ac7f3068c711781a59966afda25fc82905962 (image=quay.io/ceph/ceph:v19, name=vigorous_hofstadter, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec0188f3fc2a95e2b9817f493dc984121b3f9198c054ded156b03382e7e429d3-merged.mount: Deactivated successfully.
Nov 24 09:25:29 compute-0 podman[75556]: 2025-11-24 09:25:29.803077304 +0000 UTC m=+0.627024001 container remove 4a3e966286618efe41564fa8986ac7f3068c711781a59966afda25fc82905962 (image=quay.io/ceph/ceph:v19, name=vigorous_hofstadter, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:29 compute-0 systemd[1]: libpod-conmon-4a3e966286618efe41564fa8986ac7f3068c711781a59966afda25fc82905962.scope: Deactivated successfully.
Nov 24 09:25:29 compute-0 podman[75608]: 2025-11-24 09:25:29.862143547 +0000 UTC m=+0.039491690 container create 942f7f98c5d9625147dcdd4e821943f1f3933f7cba7886634e33fee19796523f (image=quay.io/ceph/ceph:v19, name=naughty_roentgen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 24 09:25:29 compute-0 systemd[1]: Started libpod-conmon-942f7f98c5d9625147dcdd4e821943f1f3933f7cba7886634e33fee19796523f.scope.
Nov 24 09:25:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37d75399c7a095c8b173a0efce4355a5b5e3512d8ec1dd1efb5a2d60e721e9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37d75399c7a095c8b173a0efce4355a5b5e3512d8ec1dd1efb5a2d60e721e9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37d75399c7a095c8b173a0efce4355a5b5e3512d8ec1dd1efb5a2d60e721e9e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:29 compute-0 podman[75608]: 2025-11-24 09:25:29.844842598 +0000 UTC m=+0.022190771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:29 compute-0 podman[75608]: 2025-11-24 09:25:29.951901178 +0000 UTC m=+0.129249391 container init 942f7f98c5d9625147dcdd4e821943f1f3933f7cba7886634e33fee19796523f (image=quay.io/ceph/ceph:v19, name=naughty_roentgen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 09:25:29 compute-0 podman[75608]: 2025-11-24 09:25:29.956915022 +0000 UTC m=+0.134263195 container start 942f7f98c5d9625147dcdd4e821943f1f3933f7cba7886634e33fee19796523f (image=quay.io/ceph/ceph:v19, name=naughty_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:25:29 compute-0 podman[75608]: 2025-11-24 09:25:29.961435604 +0000 UTC m=+0.138783807 container attach 942f7f98c5d9625147dcdd4e821943f1f3933f7cba7886634e33fee19796523f (image=quay.io/ceph/ceph:v19, name=naughty_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:25:30 compute-0 ceph-mon[74331]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:30 compute-0 ceph-mon[74331]: Set ssh ssh_identity_key
Nov 24 09:25:30 compute-0 ceph-mon[74331]: Set ssh private key
Nov 24 09:25:30 compute-0 ceph-mon[74331]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:30 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:30 compute-0 ceph-mon[74331]: Set ssh ssh_identity_pub
Nov 24 09:25:30 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:30 compute-0 sshd-session[75652]: Accepted publickey for ceph-admin from 192.168.122.100 port 34708 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:30 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 09:25:30 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 09:25:30 compute-0 systemd-logind[822]: New session 21 of user ceph-admin.
Nov 24 09:25:30 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 09:25:30 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 24 09:25:30 compute-0 systemd[75656]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:30 compute-0 sshd-session[75661]: Accepted publickey for ceph-admin from 192.168.122.100 port 34710 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:30 compute-0 systemd-logind[822]: New session 23 of user ceph-admin.
Nov 24 09:25:30 compute-0 systemd[75656]: Queued start job for default target Main User Target.
Nov 24 09:25:30 compute-0 systemd[75656]: Created slice User Application Slice.
Nov 24 09:25:30 compute-0 systemd[75656]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 09:25:30 compute-0 systemd[75656]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 09:25:30 compute-0 systemd[75656]: Reached target Paths.
Nov 24 09:25:30 compute-0 systemd[75656]: Reached target Timers.
Nov 24 09:25:30 compute-0 systemd[75656]: Starting D-Bus User Message Bus Socket...
Nov 24 09:25:30 compute-0 systemd[75656]: Starting Create User's Volatile Files and Directories...
Nov 24 09:25:30 compute-0 systemd[75656]: Listening on D-Bus User Message Bus Socket.
Nov 24 09:25:30 compute-0 systemd[75656]: Reached target Sockets.
Nov 24 09:25:30 compute-0 systemd[75656]: Finished Create User's Volatile Files and Directories.
Nov 24 09:25:30 compute-0 systemd[75656]: Reached target Basic System.
Nov 24 09:25:30 compute-0 systemd[75656]: Reached target Main User Target.
Nov 24 09:25:30 compute-0 systemd[75656]: Startup finished in 158ms.
Nov 24 09:25:30 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 24 09:25:30 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Nov 24 09:25:30 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 24 09:25:30 compute-0 sshd-session[75652]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:30 compute-0 sshd-session[75661]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:30 compute-0 sudo[75676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:30 compute-0 sudo[75676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:30 compute-0 sudo[75676]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:31 compute-0 ceph-mon[74331]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:31 compute-0 sshd-session[75701]: Accepted publickey for ceph-admin from 192.168.122.100 port 34716 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:31 compute-0 systemd-logind[822]: New session 24 of user ceph-admin.
Nov 24 09:25:31 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 24 09:25:31 compute-0 sshd-session[75701]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:31 compute-0 sudo[75705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Nov 24 09:25:31 compute-0 sudo[75705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:31 compute-0 sudo[75705]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:31 compute-0 ceph-mgr[74626]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 09:25:31 compute-0 sshd-session[75730]: Accepted publickey for ceph-admin from 192.168.122.100 port 34720 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:31 compute-0 systemd-logind[822]: New session 25 of user ceph-admin.
Nov 24 09:25:31 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 24 09:25:31 compute-0 sshd-session[75730]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:31 compute-0 sudo[75734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Nov 24 09:25:31 compute-0 sudo[75734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:31 compute-0 sudo[75734]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:31 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 24 09:25:31 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 24 09:25:31 compute-0 sshd-session[75759]: Accepted publickey for ceph-admin from 192.168.122.100 port 34732 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:31 compute-0 systemd-logind[822]: New session 26 of user ceph-admin.
Nov 24 09:25:31 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 24 09:25:31 compute-0 sshd-session[75759]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:32 compute-0 sudo[75763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:32 compute-0 sudo[75763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:32 compute-0 sudo[75763]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:32 compute-0 ceph-mon[74331]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:32 compute-0 sshd-session[75788]: Accepted publickey for ceph-admin from 192.168.122.100 port 34736 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:32 compute-0 systemd-logind[822]: New session 27 of user ceph-admin.
Nov 24 09:25:32 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 24 09:25:32 compute-0 sshd-session[75788]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:32 compute-0 sudo[75792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:32 compute-0 sudo[75792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:32 compute-0 sudo[75792]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:32 compute-0 sshd-session[75817]: Accepted publickey for ceph-admin from 192.168.122.100 port 34748 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:32 compute-0 systemd-logind[822]: New session 28 of user ceph-admin.
Nov 24 09:25:32 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 24 09:25:32 compute-0 sshd-session[75817]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053131 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:25:32 compute-0 sudo[75821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Nov 24 09:25:32 compute-0 sudo[75821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:32 compute-0 sudo[75821]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:32 compute-0 sshd-session[75846]: Accepted publickey for ceph-admin from 192.168.122.100 port 34750 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:32 compute-0 systemd-logind[822]: New session 29 of user ceph-admin.
Nov 24 09:25:32 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 24 09:25:32 compute-0 sshd-session[75846]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:33 compute-0 sudo[75850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:33 compute-0 sudo[75850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:33 compute-0 sudo[75850]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:33 compute-0 ceph-mon[74331]: Deploying cephadm binary to compute-0
Nov 24 09:25:33 compute-0 sshd-session[75875]: Accepted publickey for ceph-admin from 192.168.122.100 port 34762 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:33 compute-0 systemd-logind[822]: New session 30 of user ceph-admin.
Nov 24 09:25:33 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 24 09:25:33 compute-0 sshd-session[75875]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:33 compute-0 sudo[75879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Nov 24 09:25:33 compute-0 sudo[75879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:33 compute-0 sudo[75879]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:33 compute-0 ceph-mgr[74626]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 09:25:33 compute-0 sshd-session[75904]: Accepted publickey for ceph-admin from 192.168.122.100 port 34766 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:33 compute-0 systemd-logind[822]: New session 31 of user ceph-admin.
Nov 24 09:25:33 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 24 09:25:33 compute-0 sshd-session[75904]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:34 compute-0 sshd-session[75931]: Accepted publickey for ceph-admin from 192.168.122.100 port 34778 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:34 compute-0 systemd-logind[822]: New session 32 of user ceph-admin.
Nov 24 09:25:34 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 24 09:25:34 compute-0 sshd-session[75931]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:34 compute-0 sudo[75935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Nov 24 09:25:34 compute-0 sudo[75935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:34 compute-0 sudo[75935]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:35 compute-0 sshd-session[75960]: Accepted publickey for ceph-admin from 192.168.122.100 port 34786 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:25:35 compute-0 systemd-logind[822]: New session 33 of user ceph-admin.
Nov 24 09:25:35 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Nov 24 09:25:35 compute-0 sshd-session[75960]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:25:35 compute-0 sudo[75964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Nov 24 09:25:35 compute-0 sudo[75964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:35 compute-0 sudo[75964]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 24 09:25:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:35 compute-0 ceph-mgr[74626]: [cephadm INFO root] Added host compute-0
Nov 24 09:25:35 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 24 09:25:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 24 09:25:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:25:35 compute-0 naughty_roentgen[75625]: Added host 'compute-0' with addr '192.168.122.100'
Nov 24 09:25:35 compute-0 systemd[1]: libpod-942f7f98c5d9625147dcdd4e821943f1f3933f7cba7886634e33fee19796523f.scope: Deactivated successfully.
Nov 24 09:25:35 compute-0 podman[75608]: 2025-11-24 09:25:35.467953029 +0000 UTC m=+5.645301182 container died 942f7f98c5d9625147dcdd4e821943f1f3933f7cba7886634e33fee19796523f (image=quay.io/ceph/ceph:v19, name=naughty_roentgen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e37d75399c7a095c8b173a0efce4355a5b5e3512d8ec1dd1efb5a2d60e721e9e-merged.mount: Deactivated successfully.
Nov 24 09:25:35 compute-0 ceph-mgr[74626]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 09:25:35 compute-0 sudo[76011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:35 compute-0 podman[75608]: 2025-11-24 09:25:35.514384859 +0000 UTC m=+5.691733012 container remove 942f7f98c5d9625147dcdd4e821943f1f3933f7cba7886634e33fee19796523f (image=quay.io/ceph/ceph:v19, name=naughty_roentgen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 09:25:35 compute-0 sudo[76011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:35 compute-0 sudo[76011]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:35 compute-0 systemd[1]: libpod-conmon-942f7f98c5d9625147dcdd4e821943f1f3933f7cba7886634e33fee19796523f.scope: Deactivated successfully.
Nov 24 09:25:35 compute-0 sudo[76049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Nov 24 09:25:35 compute-0 sudo[76049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:35 compute-0 podman[76048]: 2025-11-24 09:25:35.574004715 +0000 UTC m=+0.040019442 container create e060e78ddb8b1463ea831fd1ea0a3bc2f334aefdd8b8a345e6715bd647931d65 (image=quay.io/ceph/ceph:v19, name=trusting_mayer, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:35 compute-0 systemd[1]: Started libpod-conmon-e060e78ddb8b1463ea831fd1ea0a3bc2f334aefdd8b8a345e6715bd647931d65.scope.
Nov 24 09:25:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:35 compute-0 podman[76048]: 2025-11-24 09:25:35.555422205 +0000 UTC m=+0.021436942 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8b3a92662debd471217f77eb9feb5c780c871c027f94f4f7047a2bcbcc017a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8b3a92662debd471217f77eb9feb5c780c871c027f94f4f7047a2bcbcc017a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8b3a92662debd471217f77eb9feb5c780c871c027f94f4f7047a2bcbcc017a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:35 compute-0 podman[76048]: 2025-11-24 09:25:35.675638891 +0000 UTC m=+0.141653678 container init e060e78ddb8b1463ea831fd1ea0a3bc2f334aefdd8b8a345e6715bd647931d65 (image=quay.io/ceph/ceph:v19, name=trusting_mayer, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:25:35 compute-0 podman[76048]: 2025-11-24 09:25:35.685069084 +0000 UTC m=+0.151083831 container start e060e78ddb8b1463ea831fd1ea0a3bc2f334aefdd8b8a345e6715bd647931d65 (image=quay.io/ceph/ceph:v19, name=trusting_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:35 compute-0 podman[76048]: 2025-11-24 09:25:35.690041907 +0000 UTC m=+0.156056664 container attach e060e78ddb8b1463ea831fd1ea0a3bc2f334aefdd8b8a345e6715bd647931d65 (image=quay.io/ceph/ceph:v19, name=trusting_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:25:36 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:36 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 24 09:25:36 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 24 09:25:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 24 09:25:36 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:36 compute-0 trusting_mayer[76091]: Scheduled mon update...
Nov 24 09:25:36 compute-0 systemd[1]: libpod-e060e78ddb8b1463ea831fd1ea0a3bc2f334aefdd8b8a345e6715bd647931d65.scope: Deactivated successfully.
Nov 24 09:25:36 compute-0 podman[76048]: 2025-11-24 09:25:36.085168978 +0000 UTC m=+0.551183715 container died e060e78ddb8b1463ea831fd1ea0a3bc2f334aefdd8b8a345e6715bd647931d65 (image=quay.io/ceph/ceph:v19, name=trusting_mayer, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c8b3a92662debd471217f77eb9feb5c780c871c027f94f4f7047a2bcbcc017a-merged.mount: Deactivated successfully.
Nov 24 09:25:36 compute-0 podman[76048]: 2025-11-24 09:25:36.129227539 +0000 UTC m=+0.595242266 container remove e060e78ddb8b1463ea831fd1ea0a3bc2f334aefdd8b8a345e6715bd647931d65 (image=quay.io/ceph/ceph:v19, name=trusting_mayer, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:36 compute-0 systemd[1]: libpod-conmon-e060e78ddb8b1463ea831fd1ea0a3bc2f334aefdd8b8a345e6715bd647931d65.scope: Deactivated successfully.
Nov 24 09:25:36 compute-0 podman[76157]: 2025-11-24 09:25:36.190253569 +0000 UTC m=+0.044623616 container create 05a9f44105f46ecf481e0620239dd58b734f7bfb5ab9b029fbfd46d7e05bf2de (image=quay.io/ceph/ceph:v19, name=quirky_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:36 compute-0 systemd[1]: Started libpod-conmon-05a9f44105f46ecf481e0620239dd58b734f7bfb5ab9b029fbfd46d7e05bf2de.scope.
Nov 24 09:25:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:36 compute-0 podman[76157]: 2025-11-24 09:25:36.171307881 +0000 UTC m=+0.025677938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ad3aa46177bfab1b61d3c5b98c2b69a838f4150a2100c034eeea6c2bc66e83/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ad3aa46177bfab1b61d3c5b98c2b69a838f4150a2100c034eeea6c2bc66e83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ad3aa46177bfab1b61d3c5b98c2b69a838f4150a2100c034eeea6c2bc66e83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:36 compute-0 podman[76157]: 2025-11-24 09:25:36.282963044 +0000 UTC m=+0.137333171 container init 05a9f44105f46ecf481e0620239dd58b734f7bfb5ab9b029fbfd46d7e05bf2de (image=quay.io/ceph/ceph:v19, name=quirky_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:36 compute-0 podman[76157]: 2025-11-24 09:25:36.289765732 +0000 UTC m=+0.144135779 container start 05a9f44105f46ecf481e0620239dd58b734f7bfb5ab9b029fbfd46d7e05bf2de (image=quay.io/ceph/ceph:v19, name=quirky_tharp, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:25:36 compute-0 podman[76157]: 2025-11-24 09:25:36.293167327 +0000 UTC m=+0.147537394 container attach 05a9f44105f46ecf481e0620239dd58b734f7bfb5ab9b029fbfd46d7e05bf2de (image=quay.io/ceph/ceph:v19, name=quirky_tharp, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 09:25:36 compute-0 podman[76110]: 2025-11-24 09:25:36.429415069 +0000 UTC m=+0.621449154 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:36 compute-0 ceph-mon[74331]: Added host compute-0
Nov 24 09:25:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:25:36 compute-0 ceph-mon[74331]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:36 compute-0 ceph-mon[74331]: Saving service mon spec with placement count:5
Nov 24 09:25:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:36 compute-0 podman[76212]: 2025-11-24 09:25:36.543622647 +0000 UTC m=+0.048022201 container create ca79b27c432fc1277c06102ed65e7ced65bdf5da8634f10670341d786ba21b84 (image=quay.io/ceph/ceph:v19, name=confident_swirles, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:36 compute-0 systemd[1]: Started libpod-conmon-ca79b27c432fc1277c06102ed65e7ced65bdf5da8634f10670341d786ba21b84.scope.
Nov 24 09:25:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:36 compute-0 podman[76212]: 2025-11-24 09:25:36.526251846 +0000 UTC m=+0.030651390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:36 compute-0 podman[76212]: 2025-11-24 09:25:36.621522454 +0000 UTC m=+0.125922028 container init ca79b27c432fc1277c06102ed65e7ced65bdf5da8634f10670341d786ba21b84 (image=quay.io/ceph/ceph:v19, name=confident_swirles, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 09:25:36 compute-0 podman[76212]: 2025-11-24 09:25:36.627839531 +0000 UTC m=+0.132239075 container start ca79b27c432fc1277c06102ed65e7ced65bdf5da8634f10670341d786ba21b84 (image=quay.io/ceph/ceph:v19, name=confident_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:36 compute-0 podman[76212]: 2025-11-24 09:25:36.631080281 +0000 UTC m=+0.135479845 container attach ca79b27c432fc1277c06102ed65e7ced65bdf5da8634f10670341d786ba21b84 (image=quay.io/ceph/ceph:v19, name=confident_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 09:25:36 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:36 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 24 09:25:36 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 24 09:25:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 24 09:25:36 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:36 compute-0 quirky_tharp[76174]: Scheduled mgr update...
Nov 24 09:25:36 compute-0 systemd[1]: libpod-05a9f44105f46ecf481e0620239dd58b734f7bfb5ab9b029fbfd46d7e05bf2de.scope: Deactivated successfully.
Nov 24 09:25:36 compute-0 podman[76157]: 2025-11-24 09:25:36.707999585 +0000 UTC m=+0.562369622 container died 05a9f44105f46ecf481e0620239dd58b734f7bfb5ab9b029fbfd46d7e05bf2de (image=quay.io/ceph/ceph:v19, name=quirky_tharp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:36 compute-0 confident_swirles[76228]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Nov 24 09:25:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-98ad3aa46177bfab1b61d3c5b98c2b69a838f4150a2100c034eeea6c2bc66e83-merged.mount: Deactivated successfully.
Nov 24 09:25:36 compute-0 systemd[1]: libpod-ca79b27c432fc1277c06102ed65e7ced65bdf5da8634f10670341d786ba21b84.scope: Deactivated successfully.
Nov 24 09:25:36 compute-0 podman[76212]: 2025-11-24 09:25:36.736308696 +0000 UTC m=+0.240708240 container died ca79b27c432fc1277c06102ed65e7ced65bdf5da8634f10670341d786ba21b84 (image=quay.io/ceph/ceph:v19, name=confident_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:36 compute-0 podman[76157]: 2025-11-24 09:25:36.751219855 +0000 UTC m=+0.605589902 container remove 05a9f44105f46ecf481e0620239dd58b734f7bfb5ab9b029fbfd46d7e05bf2de (image=quay.io/ceph/ceph:v19, name=quirky_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a7204930cf31d28c4c333acbed8c7d36bbfebb35533eea2018a4d091b17f2d-merged.mount: Deactivated successfully.
Nov 24 09:25:36 compute-0 podman[76212]: 2025-11-24 09:25:36.785855393 +0000 UTC m=+0.290254937 container remove ca79b27c432fc1277c06102ed65e7ced65bdf5da8634f10670341d786ba21b84 (image=quay.io/ceph/ceph:v19, name=confident_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:36 compute-0 systemd[1]: libpod-conmon-ca79b27c432fc1277c06102ed65e7ced65bdf5da8634f10670341d786ba21b84.scope: Deactivated successfully.
Nov 24 09:25:36 compute-0 systemd[1]: libpod-conmon-05a9f44105f46ecf481e0620239dd58b734f7bfb5ab9b029fbfd46d7e05bf2de.scope: Deactivated successfully.
Nov 24 09:25:36 compute-0 podman[76254]: 2025-11-24 09:25:36.809865467 +0000 UTC m=+0.041691343 container create fe7d4ee363367ec423d70d66cc233fb9a10c7e24118142c5b3908a756da05f28 (image=quay.io/ceph/ceph:v19, name=gracious_feynman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:36 compute-0 sudo[76049]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Nov 24 09:25:36 compute-0 systemd[1]: Started libpod-conmon-fe7d4ee363367ec423d70d66cc233fb9a10c7e24118142c5b3908a756da05f28.scope.
Nov 24 09:25:36 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e249a723d403abe41af4372eefa62eef716ea4d1f26d4e9400c69c97418d8910/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e249a723d403abe41af4372eefa62eef716ea4d1f26d4e9400c69c97418d8910/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e249a723d403abe41af4372eefa62eef716ea4d1f26d4e9400c69c97418d8910/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:36 compute-0 podman[76254]: 2025-11-24 09:25:36.880930485 +0000 UTC m=+0.112756371 container init fe7d4ee363367ec423d70d66cc233fb9a10c7e24118142c5b3908a756da05f28 (image=quay.io/ceph/ceph:v19, name=gracious_feynman, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:36 compute-0 podman[76254]: 2025-11-24 09:25:36.886880083 +0000 UTC m=+0.118705959 container start fe7d4ee363367ec423d70d66cc233fb9a10c7e24118142c5b3908a756da05f28 (image=quay.io/ceph/ceph:v19, name=gracious_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:36 compute-0 podman[76254]: 2025-11-24 09:25:36.791780689 +0000 UTC m=+0.023606575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:36 compute-0 podman[76254]: 2025-11-24 09:25:36.890349399 +0000 UTC m=+0.122175305 container attach fe7d4ee363367ec423d70d66cc233fb9a10c7e24118142c5b3908a756da05f28 (image=quay.io/ceph/ceph:v19, name=gracious_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:36 compute-0 sudo[76278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:36 compute-0 sudo[76278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:36 compute-0 sudo[76278]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:36 compute-0 sudo[76305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 24 09:25:36 compute-0 sudo[76305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:37 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:37 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service crash spec with placement *
Nov 24 09:25:37 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 24 09:25:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 24 09:25:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:37 compute-0 gracious_feynman[76275]: Scheduled crash update...
Nov 24 09:25:37 compute-0 sudo[76305]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:25:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:37 compute-0 podman[76254]: 2025-11-24 09:25:37.268937661 +0000 UTC m=+0.500763567 container died fe7d4ee363367ec423d70d66cc233fb9a10c7e24118142c5b3908a756da05f28 (image=quay.io/ceph/ceph:v19, name=gracious_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 09:25:37 compute-0 systemd[1]: libpod-fe7d4ee363367ec423d70d66cc233fb9a10c7e24118142c5b3908a756da05f28.scope: Deactivated successfully.
Nov 24 09:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e249a723d403abe41af4372eefa62eef716ea4d1f26d4e9400c69c97418d8910-merged.mount: Deactivated successfully.
Nov 24 09:25:37 compute-0 podman[76254]: 2025-11-24 09:25:37.306859479 +0000 UTC m=+0.538685365 container remove fe7d4ee363367ec423d70d66cc233fb9a10c7e24118142c5b3908a756da05f28 (image=quay.io/ceph/ceph:v19, name=gracious_feynman, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:37 compute-0 systemd[1]: libpod-conmon-fe7d4ee363367ec423d70d66cc233fb9a10c7e24118142c5b3908a756da05f28.scope: Deactivated successfully.
Nov 24 09:25:37 compute-0 sudo[76373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:37 compute-0 sudo[76373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:37 compute-0 sudo[76373]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:37 compute-0 podman[76407]: 2025-11-24 09:25:37.376659337 +0000 UTC m=+0.043217061 container create 457df1e2b9a13efb8a603da052ab41bb5c1892b799efbbbdc72d96b169a672f1 (image=quay.io/ceph/ceph:v19, name=stoic_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 09:25:37 compute-0 systemd[1]: Started libpod-conmon-457df1e2b9a13efb8a603da052ab41bb5c1892b799efbbbdc72d96b169a672f1.scope.
Nov 24 09:25:37 compute-0 sudo[76417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:25:37 compute-0 sudo[76417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584cdc18c1414c09b4103503d0162a4271af67b0d138c0aa12575aca7d3e216d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584cdc18c1414c09b4103503d0162a4271af67b0d138c0aa12575aca7d3e216d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584cdc18c1414c09b4103503d0162a4271af67b0d138c0aa12575aca7d3e216d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:37 compute-0 podman[76407]: 2025-11-24 09:25:37.358671392 +0000 UTC m=+0.025229136 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:37 compute-0 podman[76407]: 2025-11-24 09:25:37.465951487 +0000 UTC m=+0.132509231 container init 457df1e2b9a13efb8a603da052ab41bb5c1892b799efbbbdc72d96b169a672f1 (image=quay.io/ceph/ceph:v19, name=stoic_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 09:25:37 compute-0 podman[76407]: 2025-11-24 09:25:37.472630863 +0000 UTC m=+0.139188587 container start 457df1e2b9a13efb8a603da052ab41bb5c1892b799efbbbdc72d96b169a672f1 (image=quay.io/ceph/ceph:v19, name=stoic_shaw, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 09:25:37 compute-0 podman[76407]: 2025-11-24 09:25:37.475935814 +0000 UTC m=+0.142493558 container attach 457df1e2b9a13efb8a603da052ab41bb5c1892b799efbbbdc72d96b169a672f1 (image=quay.io/ceph/ceph:v19, name=stoic_shaw, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:37 compute-0 ceph-mgr[74626]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 09:25:37 compute-0 ceph-mon[74331]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:37 compute-0 ceph-mon[74331]: Saving service mgr spec with placement count:2
Nov 24 09:25:37 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:37 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:37 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:37 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:25:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Nov 24 09:25:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1526582208' entity='client.admin' 
Nov 24 09:25:37 compute-0 systemd[1]: libpod-457df1e2b9a13efb8a603da052ab41bb5c1892b799efbbbdc72d96b169a672f1.scope: Deactivated successfully.
Nov 24 09:25:37 compute-0 podman[76407]: 2025-11-24 09:25:37.85832815 +0000 UTC m=+0.524885874 container died 457df1e2b9a13efb8a603da052ab41bb5c1892b799efbbbdc72d96b169a672f1 (image=quay.io/ceph/ceph:v19, name=stoic_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 09:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-584cdc18c1414c09b4103503d0162a4271af67b0d138c0aa12575aca7d3e216d-merged.mount: Deactivated successfully.
Nov 24 09:25:37 compute-0 podman[76407]: 2025-11-24 09:25:37.893820529 +0000 UTC m=+0.560378253 container remove 457df1e2b9a13efb8a603da052ab41bb5c1892b799efbbbdc72d96b169a672f1 (image=quay.io/ceph/ceph:v19, name=stoic_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:37 compute-0 systemd[1]: libpod-conmon-457df1e2b9a13efb8a603da052ab41bb5c1892b799efbbbdc72d96b169a672f1.scope: Deactivated successfully.
Nov 24 09:25:37 compute-0 podman[76552]: 2025-11-24 09:25:37.956567582 +0000 UTC m=+0.041077408 container create 96be6b5914a4232e2aac727342826b106268bbb2f717accea57144e0938ed20a (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:37 compute-0 systemd[1]: Started libpod-conmon-96be6b5914a4232e2aac727342826b106268bbb2f717accea57144e0938ed20a.scope.
Nov 24 09:25:38 compute-0 podman[76554]: 2025-11-24 09:25:38.00376211 +0000 UTC m=+0.081459378 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e509cc050aa20db004744e93c56b10b1b3103f3830b341555f608e7951ff2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e509cc050aa20db004744e93c56b10b1b3103f3830b341555f608e7951ff2a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e509cc050aa20db004744e93c56b10b1b3103f3830b341555f608e7951ff2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:38 compute-0 podman[76552]: 2025-11-24 09:25:37.940364851 +0000 UTC m=+0.024874697 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:38 compute-0 podman[76552]: 2025-11-24 09:25:38.062201016 +0000 UTC m=+0.146710922 container init 96be6b5914a4232e2aac727342826b106268bbb2f717accea57144e0938ed20a (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 09:25:38 compute-0 podman[76552]: 2025-11-24 09:25:38.068528643 +0000 UTC m=+0.153038469 container start 96be6b5914a4232e2aac727342826b106268bbb2f717accea57144e0938ed20a (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:38 compute-0 podman[76552]: 2025-11-24 09:25:38.071874266 +0000 UTC m=+0.156384132 container attach 96be6b5914a4232e2aac727342826b106268bbb2f717accea57144e0938ed20a (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 24 09:25:38 compute-0 podman[76554]: 2025-11-24 09:25:38.116508181 +0000 UTC m=+0.194205449 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 24 09:25:38 compute-0 sudo[76417]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:25:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:38 compute-0 sudo[76642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:38 compute-0 sudo[76642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:38 compute-0 sudo[76642]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:38 compute-0 sudo[76667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:25:38 compute-0 sudo[76667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:38 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Nov 24 09:25:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:38 compute-0 systemd[1]: libpod-96be6b5914a4232e2aac727342826b106268bbb2f717accea57144e0938ed20a.scope: Deactivated successfully.
Nov 24 09:25:38 compute-0 podman[76552]: 2025-11-24 09:25:38.454942619 +0000 UTC m=+0.539452445 container died 96be6b5914a4232e2aac727342826b106268bbb2f717accea57144e0938ed20a (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 24 09:25:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-32e509cc050aa20db004744e93c56b10b1b3103f3830b341555f608e7951ff2a-merged.mount: Deactivated successfully.
Nov 24 09:25:38 compute-0 podman[76552]: 2025-11-24 09:25:38.495250856 +0000 UTC m=+0.579760682 container remove 96be6b5914a4232e2aac727342826b106268bbb2f717accea57144e0938ed20a (image=quay.io/ceph/ceph:v19, name=thirsty_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 09:25:38 compute-0 systemd[1]: libpod-conmon-96be6b5914a4232e2aac727342826b106268bbb2f717accea57144e0938ed20a.scope: Deactivated successfully.
Nov 24 09:25:38 compute-0 podman[76706]: 2025-11-24 09:25:38.549433797 +0000 UTC m=+0.034593567 container create d1221f555ea6409d3716ec28183fd2ffec73dd8fc9cc1d8ae24267eec8c4b38f (image=quay.io/ceph/ceph:v19, name=distracted_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:25:38 compute-0 systemd[1]: Started libpod-conmon-d1221f555ea6409d3716ec28183fd2ffec73dd8fc9cc1d8ae24267eec8c4b38f.scope.
Nov 24 09:25:38 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76735 (sysctl)
Nov 24 09:25:38 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 24 09:25:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36c958caf16b70a6f88c851926460a3b85d5c8c26899c5f0d300299fd3d350cc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36c958caf16b70a6f88c851926460a3b85d5c8c26899c5f0d300299fd3d350cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36c958caf16b70a6f88c851926460a3b85d5c8c26899c5f0d300299fd3d350cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:38 compute-0 podman[76706]: 2025-11-24 09:25:38.533538264 +0000 UTC m=+0.018698054 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:38 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 24 09:25:38 compute-0 podman[76706]: 2025-11-24 09:25:38.643304261 +0000 UTC m=+0.128464061 container init d1221f555ea6409d3716ec28183fd2ffec73dd8fc9cc1d8ae24267eec8c4b38f (image=quay.io/ceph/ceph:v19, name=distracted_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:38 compute-0 podman[76706]: 2025-11-24 09:25:38.650476618 +0000 UTC m=+0.135636388 container start d1221f555ea6409d3716ec28183fd2ffec73dd8fc9cc1d8ae24267eec8c4b38f (image=quay.io/ceph/ceph:v19, name=distracted_austin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:25:38 compute-0 podman[76706]: 2025-11-24 09:25:38.654040857 +0000 UTC m=+0.139200627 container attach d1221f555ea6409d3716ec28183fd2ffec73dd8fc9cc1d8ae24267eec8c4b38f (image=quay.io/ceph/ceph:v19, name=distracted_austin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:25:38 compute-0 ceph-mon[74331]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:38 compute-0 ceph-mon[74331]: Saving service crash spec with placement *
Nov 24 09:25:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1526582208' entity='client.admin' 
Nov 24 09:25:38 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:38 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:38 compute-0 sudo[76667]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:38 compute-0 sudo[76780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:38 compute-0 sudo[76780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:38 compute-0 sudo[76780]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:39 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 24 09:25:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:39 compute-0 ceph-mgr[74626]: [cephadm INFO root] Added label _admin to host compute-0
Nov 24 09:25:39 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 24 09:25:39 compute-0 distracted_austin[76734]: Added label _admin to host compute-0
Nov 24 09:25:39 compute-0 systemd[1]: libpod-d1221f555ea6409d3716ec28183fd2ffec73dd8fc9cc1d8ae24267eec8c4b38f.scope: Deactivated successfully.
Nov 24 09:25:39 compute-0 podman[76706]: 2025-11-24 09:25:39.031325675 +0000 UTC m=+0.516485435 container died d1221f555ea6409d3716ec28183fd2ffec73dd8fc9cc1d8ae24267eec8c4b38f (image=quay.io/ceph/ceph:v19, name=distracted_austin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 09:25:39 compute-0 sudo[76805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 24 09:25:39 compute-0 sudo[76805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-36c958caf16b70a6f88c851926460a3b85d5c8c26899c5f0d300299fd3d350cc-merged.mount: Deactivated successfully.
Nov 24 09:25:39 compute-0 podman[76706]: 2025-11-24 09:25:39.069757157 +0000 UTC m=+0.554916937 container remove d1221f555ea6409d3716ec28183fd2ffec73dd8fc9cc1d8ae24267eec8c4b38f (image=quay.io/ceph/ceph:v19, name=distracted_austin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 24 09:25:39 compute-0 systemd[1]: libpod-conmon-d1221f555ea6409d3716ec28183fd2ffec73dd8fc9cc1d8ae24267eec8c4b38f.scope: Deactivated successfully.
Nov 24 09:25:39 compute-0 podman[76843]: 2025-11-24 09:25:39.1329069 +0000 UTC m=+0.040176075 container create 2c834b1327f1cddcc5113613a31cf01942f0b9de0323fea068b46b9e9f8d5bee (image=quay.io/ceph/ceph:v19, name=kind_colden, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:39 compute-0 systemd[1]: Started libpod-conmon-2c834b1327f1cddcc5113613a31cf01942f0b9de0323fea068b46b9e9f8d5bee.scope.
Nov 24 09:25:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921f548a7860d5108b28a5fecdb5f008f607a18cb81239bf00c41dad17d6e141/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921f548a7860d5108b28a5fecdb5f008f607a18cb81239bf00c41dad17d6e141/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921f548a7860d5108b28a5fecdb5f008f607a18cb81239bf00c41dad17d6e141/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:39 compute-0 podman[76843]: 2025-11-24 09:25:39.117270734 +0000 UTC m=+0.024539919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:39 compute-0 podman[76843]: 2025-11-24 09:25:39.216415827 +0000 UTC m=+0.123685002 container init 2c834b1327f1cddcc5113613a31cf01942f0b9de0323fea068b46b9e9f8d5bee (image=quay.io/ceph/ceph:v19, name=kind_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:39 compute-0 podman[76843]: 2025-11-24 09:25:39.22301222 +0000 UTC m=+0.130281385 container start 2c834b1327f1cddcc5113613a31cf01942f0b9de0323fea068b46b9e9f8d5bee (image=quay.io/ceph/ceph:v19, name=kind_colden, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 09:25:39 compute-0 podman[76843]: 2025-11-24 09:25:39.226579009 +0000 UTC m=+0.133848204 container attach 2c834b1327f1cddcc5113613a31cf01942f0b9de0323fea068b46b9e9f8d5bee (image=quay.io/ceph/ceph:v19, name=kind_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:39 compute-0 sudo[76805]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:25:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:39 compute-0 sudo[76884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:39 compute-0 sudo[76884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:39 compute-0 sudo[76884]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:39 compute-0 sudo[76926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- inventory --format=json-pretty --filter-for-batch
Nov 24 09:25:39 compute-0 sudo[76926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:39 compute-0 ceph-mgr[74626]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 09:25:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Nov 24 09:25:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/395949677' entity='client.admin' 
Nov 24 09:25:39 compute-0 kind_colden[76859]: set mgr/dashboard/cluster/status
Nov 24 09:25:39 compute-0 systemd[1]: libpod-2c834b1327f1cddcc5113613a31cf01942f0b9de0323fea068b46b9e9f8d5bee.scope: Deactivated successfully.
Nov 24 09:25:39 compute-0 podman[76843]: 2025-11-24 09:25:39.674668271 +0000 UTC m=+0.581937436 container died 2c834b1327f1cddcc5113613a31cf01942f0b9de0323fea068b46b9e9f8d5bee (image=quay.io/ceph/ceph:v19, name=kind_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-921f548a7860d5108b28a5fecdb5f008f607a18cb81239bf00c41dad17d6e141-merged.mount: Deactivated successfully.
Nov 24 09:25:39 compute-0 podman[76843]: 2025-11-24 09:25:39.824975731 +0000 UTC m=+0.732244906 container remove 2c834b1327f1cddcc5113613a31cf01942f0b9de0323fea068b46b9e9f8d5bee (image=quay.io/ceph/ceph:v19, name=kind_colden, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:39 compute-0 systemd[1]: libpod-conmon-2c834b1327f1cddcc5113613a31cf01942f0b9de0323fea068b46b9e9f8d5bee.scope: Deactivated successfully.
Nov 24 09:25:39 compute-0 podman[77004]: 2025-11-24 09:25:39.898797509 +0000 UTC m=+0.041980550 container create ea2c55cb9987bf04e6852412de5f2e3470743e63e96a9e6115c6027afd7a505f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_lamport, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:39 compute-0 sudo[73272]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:39 compute-0 systemd[1]: Started libpod-conmon-ea2c55cb9987bf04e6852412de5f2e3470743e63e96a9e6115c6027afd7a505f.scope.
Nov 24 09:25:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:39 compute-0 podman[77004]: 2025-11-24 09:25:39.879413439 +0000 UTC m=+0.022596520 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:25:39 compute-0 podman[77004]: 2025-11-24 09:25:39.978371409 +0000 UTC m=+0.121554470 container init ea2c55cb9987bf04e6852412de5f2e3470743e63e96a9e6115c6027afd7a505f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_lamport, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 09:25:39 compute-0 podman[77004]: 2025-11-24 09:25:39.982645014 +0000 UTC m=+0.125828055 container start ea2c55cb9987bf04e6852412de5f2e3470743e63e96a9e6115c6027afd7a505f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_lamport, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:25:39 compute-0 podman[77004]: 2025-11-24 09:25:39.985529626 +0000 UTC m=+0.128712667 container attach ea2c55cb9987bf04e6852412de5f2e3470743e63e96a9e6115c6027afd7a505f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_lamport, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:39 compute-0 stoic_lamport[77019]: 167 167
Nov 24 09:25:39 compute-0 systemd[1]: libpod-ea2c55cb9987bf04e6852412de5f2e3470743e63e96a9e6115c6027afd7a505f.scope: Deactivated successfully.
Nov 24 09:25:39 compute-0 podman[77004]: 2025-11-24 09:25:39.987826732 +0000 UTC m=+0.131009773 container died ea2c55cb9987bf04e6852412de5f2e3470743e63e96a9e6115c6027afd7a505f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_lamport, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1be836cb8bb9a2743dabbee9ca2bddedb9e4330f78c0f107bff8b6c13626aee1-merged.mount: Deactivated successfully.
Nov 24 09:25:40 compute-0 podman[77004]: 2025-11-24 09:25:40.017186019 +0000 UTC m=+0.160369050 container remove ea2c55cb9987bf04e6852412de5f2e3470743e63e96a9e6115c6027afd7a505f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 09:25:40 compute-0 ceph-mon[74331]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:40 compute-0 ceph-mon[74331]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:40 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:40 compute-0 ceph-mon[74331]: Added label _admin to host compute-0
Nov 24 09:25:40 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:40 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/395949677' entity='client.admin' 
Nov 24 09:25:40 compute-0 systemd[1]: libpod-conmon-ea2c55cb9987bf04e6852412de5f2e3470743e63e96a9e6115c6027afd7a505f.scope: Deactivated successfully.
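
The burst above, container create, init, start, attach, died, remove, followed by the libpod-conmon scope deactivating, is the journal footprint of a single `podman run --rm` invocation: cephadm launches a throwaway ceph container (here named stoic_lamport, which printed the ceph uid/gid pair "167 167"), reads its output, and podman removes it the moment it exits. A minimal sketch of that one-shot pattern, assuming only that podman is on PATH; the helper name and example command are illustrative, not taken from the log:

    import subprocess

    def run_oneshot(image, args):
        """Run a short-lived container and return its stdout.

        --rm deletes the container as soon as it exits, which is what
        produces the create/start/attach/died/remove burst in the journal.
        """
        cmd = ["podman", "run", "--rm", "--net=host", image, *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # Illustrative call; the log pins the image by digest rather than tag.
    # run_oneshot("quay.io/ceph/ceph:v19", ["ceph", "--version"])
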
Nov 24 09:25:40 compute-0 podman[77041]: 2025-11-24 09:25:40.172428542 +0000 UTC m=+0.046436041 container create 5a9591ea393a84663f5700933b4e4e8e8b53a1b2da420a68078c4c58761df979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_snyder, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:40 compute-0 systemd[1]: Started libpod-conmon-5a9591ea393a84663f5700933b4e4e8e8b53a1b2da420a68078c4c58761df979.scope.
Nov 24 09:25:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f935441cab4518c1f60d950c12cc5269398424d0a5d785004ac4339d8b0406/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f935441cab4518c1f60d950c12cc5269398424d0a5d785004ac4339d8b0406/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f935441cab4518c1f60d950c12cc5269398424d0a5d785004ac4339d8b0406/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f935441cab4518c1f60d950c12cc5269398424d0a5d785004ac4339d8b0406/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:40 compute-0 podman[77041]: 2025-11-24 09:25:40.154998621 +0000 UTC m=+0.029006140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:25:40 compute-0 podman[77041]: 2025-11-24 09:25:40.261962198 +0000 UTC m=+0.135969697 container init 5a9591ea393a84663f5700933b4e4e8e8b53a1b2da420a68078c4c58761df979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:40 compute-0 podman[77041]: 2025-11-24 09:25:40.271426963 +0000 UTC m=+0.145434462 container start 5a9591ea393a84663f5700933b4e4e8e8b53a1b2da420a68078c4c58761df979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:40 compute-0 podman[77041]: 2025-11-24 09:25:40.274623631 +0000 UTC m=+0.148631120 container attach 5a9591ea393a84663f5700933b4e4e8e8b53a1b2da420a68078c4c58761df979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_snyder, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:25:40 compute-0 sudo[77085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kacpsyicwssuhtelyanjjsmfckzbfhzh ; /usr/bin/python3'
Nov 24 09:25:40 compute-0 sudo[77085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:25:40 compute-0 python3[77087]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
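
This Ansible task drives a containerized ceph client to set the cephadm module option mgr/cephadm/use_repo_digest to false, telling cephadm not to rewrite image references into repo digests. A hedged sketch of composing the same invocation from Python; the fsid, volume mounts, and image tag are copied from the log line, while the helper name is an assumption of mine:

    import subprocess

    FSID = "84a084c3-61a7-5de7-8207-1f88efa59a64"  # from the command line above

    def ceph_cli(*args):
        """Run `ceph ...` through the quay.io/ceph/ceph:v19 container."""
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
            "--fsid", FSID,
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            *args,
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # ceph_cli("config", "set", "mgr", "mgr/cephadm/use_repo_digest", "false")
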
Nov 24 09:25:40 compute-0 podman[77088]: 2025-11-24 09:25:40.485794209 +0000 UTC m=+0.042219016 container create 474f8c24bae11fe784168475bfe582d2128ab02843ddb547ff38fd31cef74fc8 (image=quay.io/ceph/ceph:v19, name=gifted_hamilton, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:25:40 compute-0 systemd[1]: Started libpod-conmon-474f8c24bae11fe784168475bfe582d2128ab02843ddb547ff38fd31cef74fc8.scope.
Nov 24 09:25:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:40 compute-0 podman[77088]: 2025-11-24 09:25:40.465206289 +0000 UTC m=+0.021631126 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175a14fd6daecd72bf866e0bdd5ecb75af107038aa71ed7d602aef31f5244e76/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175a14fd6daecd72bf866e0bdd5ecb75af107038aa71ed7d602aef31f5244e76/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:40 compute-0 podman[77088]: 2025-11-24 09:25:40.577823807 +0000 UTC m=+0.134248634 container init 474f8c24bae11fe784168475bfe582d2128ab02843ddb547ff38fd31cef74fc8 (image=quay.io/ceph/ceph:v19, name=gifted_hamilton, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:40 compute-0 podman[77088]: 2025-11-24 09:25:40.58564747 +0000 UTC m=+0.142072277 container start 474f8c24bae11fe784168475bfe582d2128ab02843ddb547ff38fd31cef74fc8 (image=quay.io/ceph/ceph:v19, name=gifted_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:25:40 compute-0 podman[77088]: 2025-11-24 09:25:40.590395478 +0000 UTC m=+0.146820305 container attach 474f8c24bae11fe784168475bfe582d2128ab02843ddb547ff38fd31cef74fc8 (image=quay.io/ceph/ceph:v19, name=gifted_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:25:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Nov 24 09:25:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3408495661' entity='client.admin' 
Nov 24 09:25:40 compute-0 systemd[1]: libpod-474f8c24bae11fe784168475bfe582d2128ab02843ddb547ff38fd31cef74fc8.scope: Deactivated successfully.
Nov 24 09:25:40 compute-0 conmon[77105]: conmon 474f8c24bae11fe78416 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-474f8c24bae11fe784168475bfe582d2128ab02843ddb547ff38fd31cef74fc8.scope/container/memory.events
Nov 24 09:25:40 compute-0 podman[77088]: 2025-11-24 09:25:40.947968949 +0000 UTC m=+0.504393776 container died 474f8c24bae11fe784168475bfe582d2128ab02843ddb547ff38fd31cef74fc8 (image=quay.io/ceph/ceph:v19, name=gifted_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-175a14fd6daecd72bf866e0bdd5ecb75af107038aa71ed7d602aef31f5244e76-merged.mount: Deactivated successfully.
Nov 24 09:25:40 compute-0 podman[77088]: 2025-11-24 09:25:40.987272532 +0000 UTC m=+0.543697339 container remove 474f8c24bae11fe784168475bfe582d2128ab02843ddb547ff38fd31cef74fc8 (image=quay.io/ceph/ceph:v19, name=gifted_hamilton, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 24 09:25:40 compute-0 systemd[1]: libpod-conmon-474f8c24bae11fe784168475bfe582d2128ab02843ddb547ff38fd31cef74fc8.scope: Deactivated successfully.
Nov 24 09:25:41 compute-0 sudo[77085]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 angry_snyder[77057]: [
Nov 24 09:25:41 compute-0 angry_snyder[77057]:     {
Nov 24 09:25:41 compute-0 angry_snyder[77057]:         "available": false,
Nov 24 09:25:41 compute-0 angry_snyder[77057]:         "being_replaced": false,
Nov 24 09:25:41 compute-0 angry_snyder[77057]:         "ceph_device_lvm": false,
Nov 24 09:25:41 compute-0 angry_snyder[77057]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:         "lsm_data": {},
Nov 24 09:25:41 compute-0 angry_snyder[77057]:         "lvs": [],
Nov 24 09:25:41 compute-0 angry_snyder[77057]:         "path": "/dev/sr0",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:         "rejected_reasons": [
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "Insufficient space (<5GB)",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "Has a FileSystem"
Nov 24 09:25:41 compute-0 angry_snyder[77057]:         ],
Nov 24 09:25:41 compute-0 angry_snyder[77057]:         "sys_api": {
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "actuators": null,
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "device_nodes": [
Nov 24 09:25:41 compute-0 angry_snyder[77057]:                 "sr0"
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             ],
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "devname": "sr0",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "human_readable_size": "482.00 KB",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "id_bus": "ata",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "model": "QEMU DVD-ROM",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "nr_requests": "2",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "parent": "/dev/sr0",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "partitions": {},
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "path": "/dev/sr0",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "removable": "1",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "rev": "2.5+",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "ro": "0",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "rotational": "1",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "sas_address": "",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "sas_device_handle": "",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "scheduler_mode": "mq-deadline",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "sectors": 0,
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "sectorsize": "2048",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "size": 493568.0,
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "support_discard": "2048",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "type": "disk",
Nov 24 09:25:41 compute-0 angry_snyder[77057]:             "vendor": "QEMU"
Nov 24 09:25:41 compute-0 angry_snyder[77057]:         }
Nov 24 09:25:41 compute-0 angry_snyder[77057]:     }
Nov 24 09:25:41 compute-0 angry_snyder[77057]: ]
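
The JSON printed by angry_snyder is device-inventory output (the shape matches what `ceph-volume inventory --format json` emits): the only block device found, /dev/sr0, is rejected as an OSD candidate because it is under 5 GB and already carries a filesystem, so "available" is false. A small sketch for consuming that output, assuming the container's stdout has been captured into a string:

    import json

    def usable_devices(inventory_json):
        """Paths of devices the inventory marks as available for OSDs."""
        return [d["path"] for d in json.loads(inventory_json) if d.get("available")]

    def rejection_report(inventory_json):
        """Map each rejected device path to its rejection reasons."""
        return {d["path"]: d["rejected_reasons"]
                for d in json.loads(inventory_json) if not d.get("available")}

    # For the output above:
    # rejection_report(stdout) == {"/dev/sr0": ["Insufficient space (<5GB)", "Has a FileSystem"]}
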
Nov 24 09:25:41 compute-0 systemd[1]: libpod-5a9591ea393a84663f5700933b4e4e8e8b53a1b2da420a68078c4c58761df979.scope: Deactivated successfully.
Nov 24 09:25:41 compute-0 podman[77041]: 2025-11-24 09:25:41.048290633 +0000 UTC m=+0.922298132 container died 5a9591ea393a84663f5700933b4e4e8e8b53a1b2da420a68078c4c58761df979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_snyder, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-50f935441cab4518c1f60d950c12cc5269398424d0a5d785004ac4339d8b0406-merged.mount: Deactivated successfully.
Nov 24 09:25:41 compute-0 podman[77041]: 2025-11-24 09:25:41.082088479 +0000 UTC m=+0.956095978 container remove 5a9591ea393a84663f5700933b4e4e8e8b53a1b2da420a68078c4c58761df979 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_snyder, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:41 compute-0 systemd[1]: libpod-conmon-5a9591ea393a84663f5700933b4e4e8e8b53a1b2da420a68078c4c58761df979.scope: Deactivated successfully.
Nov 24 09:25:41 compute-0 sudo[76926]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:25:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:25:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:25:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:25:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 09:25:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:25:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:25:41 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:25:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:25:41 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:25:41 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:25:41 compute-0 sudo[78089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:25:41 compute-0 sudo[78089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78089]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 sudo[78114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:25:41 compute-0 sudo[78114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78114]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 sudo[78139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:25:41 compute-0 sudo[78139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78139]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 sudo[78187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:41 compute-0 sudo[78187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78187]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 sudo[78241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:25:41 compute-0 sudo[78241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78241]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 ceph-mgr[74626]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 09:25:41 compute-0 sudo[78312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:25:41 compute-0 sudo[78312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78312]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 sudo[78337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:25:41 compute-0 sudo[78337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78337]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 sudo[78362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 24 09:25:41 compute-0 sudo[78362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78362]: pam_unix(sudo:session): session closed for user root
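
The sudo sequence above is cephadm's standard file-distribution dance for /etc/ceph/ceph.conf: create the target directory, build ceph.conf.new in a staging tree under /tmp/cephadm-<fsid>, fix ownership (0:0) and mode (644) while the file is still staged, then mv it over the destination so no reader ever sees a partially written config. A sketch of the same staged install, assuming staging and destination sit on the same filesystem so the final rename is atomic; names are illustrative:

    import os, tempfile

    def install_atomically(content, dest, mode=0o644, uid=0, gid=0):
        """Stage next to the destination, set ownership/mode, rename into place."""
        dest_dir = os.path.dirname(dest)
        os.makedirs(dest_dir, exist_ok=True)
        fd, staging = tempfile.mkstemp(dir=dest_dir, suffix=".new")
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.chown(staging, uid, gid)   # the log does this via sudo chown -R 0:0
        os.chmod(staging, mode)       # 644 for ceph.conf
        os.replace(staging, dest)     # atomic rename on the same filesystem

    # install_atomically(minimal_conf, "/etc/ceph/ceph.conf")
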
Nov 24 09:25:41 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:25:41 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:25:41 compute-0 sudo[78387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:25:41 compute-0 sudo[78387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78387]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 sudo[78435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:25:41 compute-0 sudo[78435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78435]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 sudo[78532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkqzrjprdorgnlqllnjzhgeebjxezlsw ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763976341.3286479-37040-157962864729107/async_wrapper.py j772933658785 30 /home/zuul/.ansible/tmp/ansible-tmp-1763976341.3286479-37040-157962864729107/AnsiballZ_command.py _'
Nov 24 09:25:41 compute-0 sudo[78488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:25:41 compute-0 sudo[78532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:25:41 compute-0 sudo[78488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78488]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 sudo[78537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:41 compute-0 sudo[78537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78537]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 sudo[78562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:25:41 compute-0 sudo[78562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:41 compute-0 sudo[78562]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:41 compute-0 ansible-async_wrapper.py[78535]: Invoked with j772933658785 30 /home/zuul/.ansible/tmp/ansible-tmp-1763976341.3286479-37040-157962864729107/AnsiballZ_command.py _
Nov 24 09:25:41 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3408495661' entity='client.admin' 
Nov 24 09:25:41 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:41 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:41 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:41 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:41 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:25:41 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:41 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:25:41 compute-0 ansible-async_wrapper.py[78612]: Starting module and watcher
Nov 24 09:25:41 compute-0 ansible-async_wrapper.py[78612]: Start watching 78613 (30)
Nov 24 09:25:41 compute-0 ansible-async_wrapper.py[78613]: Start module (78613)
Nov 24 09:25:41 compute-0 ansible-async_wrapper.py[78535]: Return async_wrapper task started.
Nov 24 09:25:41 compute-0 sudo[78532]: pam_unix(sudo:session): session closed for user root
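
The ansible-async_wrapper entries show Ansible's fire-and-forget execution path: the wrapper forks the real module (PID 78613 here), watches it against the 30-second limit passed on its command line, and immediately reports the job id (j772933658785) that later async_status tasks poll. A rough sketch of what that polling amounts to, assuming the wrapper's convention of writing a JSON result file named after the job id under the async directory (the log later polls jid=j772933658785.78535 in /root/.ansible_async):

    import json, os, time

    def wait_for_async(jid, async_dir="~/.ansible_async", timeout=30, interval=1):
        """Poll the result file the async wrapper writes when the module ends."""
        path = os.path.expanduser(os.path.join(async_dir, jid))
        deadline = time.time() + timeout
        while time.time() < deadline:
            if os.path.exists(path):
                with open(path) as f:
                    data = json.load(f)
                if data.get("finished"):
                    return data
            time.sleep(interval)
        raise TimeoutError(f"async job {jid} did not finish within {timeout}s")
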
Nov 24 09:25:41 compute-0 sudo[78615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:25:41 compute-0 sudo[78615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78615]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 sudo[78640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:25:42 compute-0 sudo[78640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78640]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 python3[78614]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:25:42 compute-0 sudo[78665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:25:42 compute-0 sudo[78665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78665]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:25:42 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:25:42 compute-0 podman[78686]: 2025-11-24 09:25:42.134023198 +0000 UTC m=+0.037039937 container create 5c9a9009fafda6e44f592619a1eefe69ba9f3364e2787499f189b71ea19bd3b8 (image=quay.io/ceph/ceph:v19, name=vigilant_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Nov 24 09:25:42 compute-0 systemd[1]: Started libpod-conmon-5c9a9009fafda6e44f592619a1eefe69ba9f3364e2787499f189b71ea19bd3b8.scope.
Nov 24 09:25:42 compute-0 sudo[78701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:25:42 compute-0 sudo[78701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78701]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c309c13b69970cb92380cc677667daf7466c26294e14bd28bd293bf76c2d7fa9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c309c13b69970cb92380cc677667daf7466c26294e14bd28bd293bf76c2d7fa9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:42 compute-0 podman[78686]: 2025-11-24 09:25:42.119575911 +0000 UTC m=+0.022592680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:42 compute-0 podman[78686]: 2025-11-24 09:25:42.220197982 +0000 UTC m=+0.123214741 container init 5c9a9009fafda6e44f592619a1eefe69ba9f3364e2787499f189b71ea19bd3b8 (image=quay.io/ceph/ceph:v19, name=vigilant_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:42 compute-0 podman[78686]: 2025-11-24 09:25:42.22783407 +0000 UTC m=+0.130850809 container start 5c9a9009fafda6e44f592619a1eefe69ba9f3364e2787499f189b71ea19bd3b8 (image=quay.io/ceph/ceph:v19, name=vigilant_gates, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:25:42 compute-0 podman[78686]: 2025-11-24 09:25:42.23101645 +0000 UTC m=+0.134033209 container attach 5c9a9009fafda6e44f592619a1eefe69ba9f3364e2787499f189b71ea19bd3b8 (image=quay.io/ceph/ceph:v19, name=vigilant_gates, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:42 compute-0 sudo[78732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:25:42 compute-0 sudo[78732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78732]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 sudo[78758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:25:42 compute-0 sudo[78758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78758]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 sudo[78783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:42 compute-0 sudo[78783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78783]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 sudo[78827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:25:42 compute-0 sudo[78827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78827]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 sudo[78875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:25:42 compute-0 sudo[78875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78875]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 sudo[78900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:25:42 compute-0 sudo[78900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78900]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:25:42 compute-0 vigilant_gates[78728]: 
Nov 24 09:25:42 compute-0 vigilant_gates[78728]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
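
That single JSON line is the `ceph orch status --format json` result the vigilant_gates container was started for: the cephadm backend is available, not paused, and running 10 workers. A tiny sketch of gating further automation on it:

    import json

    def orchestrator_ready(status_json):
        """True when `ceph orch status --format json` reports a usable backend."""
        status = json.loads(status_json)
        return status.get("available", False) and not status.get("paused", True)

    # orchestrator_ready('{"available": true, "backend": "cephadm", "paused": false, "workers": 10}')
    # -> True
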
Nov 24 09:25:42 compute-0 systemd[1]: libpod-5c9a9009fafda6e44f592619a1eefe69ba9f3364e2787499f189b71ea19bd3b8.scope: Deactivated successfully.
Nov 24 09:25:42 compute-0 podman[78686]: 2025-11-24 09:25:42.605058359 +0000 UTC m=+0.508075098 container died 5c9a9009fafda6e44f592619a1eefe69ba9f3364e2787499f189b71ea19bd3b8 (image=quay.io/ceph/ceph:v19, name=vigilant_gates, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c309c13b69970cb92380cc677667daf7466c26294e14bd28bd293bf76c2d7fa9-merged.mount: Deactivated successfully.
Nov 24 09:25:42 compute-0 sudo[78925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 24 09:25:42 compute-0 sudo[78925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78925]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:25:42 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:25:42 compute-0 podman[78686]: 2025-11-24 09:25:42.644220048 +0000 UTC m=+0.547236787 container remove 5c9a9009fafda6e44f592619a1eefe69ba9f3364e2787499f189b71ea19bd3b8 (image=quay.io/ceph/ceph:v19, name=vigilant_gates, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 09:25:42 compute-0 systemd[1]: libpod-conmon-5c9a9009fafda6e44f592619a1eefe69ba9f3364e2787499f189b71ea19bd3b8.scope: Deactivated successfully.
Nov 24 09:25:42 compute-0 ansible-async_wrapper.py[78613]: Module complete (78613)
Nov 24 09:25:42 compute-0 sudo[78963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:25:42 compute-0 sudo[78963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78963]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:25:42 compute-0 sudo[78988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:25:42 compute-0 sudo[78988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[78988]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 sudo[79013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:25:42 compute-0 sudo[79013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[79013]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 sudo[79038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:42 compute-0 sudo[79038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[79038]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 sudo[79063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:25:42 compute-0 sudo[79063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:42 compute-0 sudo[79063]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:42 compute-0 ceph-mon[74331]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:25:42 compute-0 ceph-mon[74331]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:25:43 compute-0 sudo[79111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:25:43 compute-0 sudo[79111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:43 compute-0 sudo[79111]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:43 compute-0 sudo[79159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:25:43 compute-0 sudo[79159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:43 compute-0 sudo[79159]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:43 compute-0 sudo[79184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:25:43 compute-0 sudo[79184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:43 compute-0 sudo[79184]: pam_unix(sudo:session): session closed for user root
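
Note the deliberate mode difference between the two installs: the ceph.conf copies are staged with chmod 644, while both admin keyring copies get chmod 600, since the keyring carries the client.admin secret. A quick check, with an assumed default path, that an installed keyring is owner-only:

    import os, stat

    def keyring_is_private(path="/etc/ceph/ceph.client.admin.keyring"):
        """True when the keyring is readable/writable by its owner alone."""
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0
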
Nov 24 09:25:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:25:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:25:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:25:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:43 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 1bc2362b-3f77-49eb-94e2-d61e1fd7a670 (Updating crash deployment (+1 -> 1))
Nov 24 09:25:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 24 09:25:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:25:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 09:25:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:25:43 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:43 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 24 09:25:43 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
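
Deploying crash.compute-0 follows the usual cephadm daemon flow visible in the surrounding mon audit lines: mint a narrowly scoped key with auth get-or-create (caps limited to "profile crash" on both mon and mgr), regenerate a minimal ceph.conf, then have the copied cephadm binary run `_orch deploy` on the host. A standalone sketch of the key-minting step through the same containerized client; the entity and caps are copied from the log, the helper is hypothetical:

    import subprocess

    def mint_crash_key(host):
        """Create (or fetch) the scoped key for the crash agent on `host`."""
        cmd = [
            "podman", "run", "--rm", "--net=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
            "auth", "get-or-create", f"client.crash.{host}",
            "mon", "profile crash",
            "mgr", "profile crash",
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # mint_crash_key("compute-0") returns a keyring block for client.crash.compute-0
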
Nov 24 09:25:43 compute-0 sudo[79209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:43 compute-0 sudo[79209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:43 compute-0 sudo[79209]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:43 compute-0 sudo[79277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyofudhxttifprscmirtlktbmmbeemlb ; /usr/bin/python3'
Nov 24 09:25:43 compute-0 sudo[79277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:25:43 compute-0 sudo[79235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:43 compute-0 sudo[79235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:43 compute-0 python3[79283]: ansible-ansible.legacy.async_status Invoked with jid=j772933658785.78535 mode=status _async_dir=/root/.ansible_async
Nov 24 09:25:43 compute-0 sudo[79277]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:43 compute-0 ceph-mgr[74626]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 09:25:43 compute-0 sudo[79353]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahyrwtuilmikbrxnyplvsdskqxdhzctd ; /usr/bin/python3'
Nov 24 09:25:43 compute-0 sudo[79353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:25:43 compute-0 podman[79374]: 2025-11-24 09:25:43.653602744 +0000 UTC m=+0.036649579 container create 8c4bfcd37ff18c6f2e5e9760cbe0c7fe8f995967084f1607668644666754281b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 24 09:25:43 compute-0 systemd[1]: Started libpod-conmon-8c4bfcd37ff18c6f2e5e9760cbe0c7fe8f995967084f1607668644666754281b.scope.
Nov 24 09:25:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:43 compute-0 python3[79360]: ansible-ansible.legacy.async_status Invoked with jid=j772933658785.78535 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 09:25:43 compute-0 sudo[79353]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:43 compute-0 podman[79374]: 2025-11-24 09:25:43.711088427 +0000 UTC m=+0.094135282 container init 8c4bfcd37ff18c6f2e5e9760cbe0c7fe8f995967084f1607668644666754281b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Nov 24 09:25:43 compute-0 podman[79374]: 2025-11-24 09:25:43.717347651 +0000 UTC m=+0.100394496 container start 8c4bfcd37ff18c6f2e5e9760cbe0c7fe8f995967084f1607668644666754281b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 24 09:25:43 compute-0 podman[79374]: 2025-11-24 09:25:43.721159536 +0000 UTC m=+0.104206371 container attach 8c4bfcd37ff18c6f2e5e9760cbe0c7fe8f995967084f1607668644666754281b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:43 compute-0 jolly_herschel[79390]: 167 167
Nov 24 09:25:43 compute-0 systemd[1]: libpod-8c4bfcd37ff18c6f2e5e9760cbe0c7fe8f995967084f1607668644666754281b.scope: Deactivated successfully.
Nov 24 09:25:43 compute-0 podman[79374]: 2025-11-24 09:25:43.722866978 +0000 UTC m=+0.105913813 container died 8c4bfcd37ff18c6f2e5e9760cbe0c7fe8f995967084f1607668644666754281b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:43 compute-0 podman[79374]: 2025-11-24 09:25:43.637241569 +0000 UTC m=+0.020288434 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:25:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4128974b0047015434fc1a31e5718803f34c43693e483bb37e7f17949bb5476e-merged.mount: Deactivated successfully.
Nov 24 09:25:43 compute-0 podman[79374]: 2025-11-24 09:25:43.752247745 +0000 UTC m=+0.135294580 container remove 8c4bfcd37ff18c6f2e5e9760cbe0c7fe8f995967084f1607668644666754281b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:43 compute-0 systemd[1]: libpod-conmon-8c4bfcd37ff18c6f2e5e9760cbe0c7fe8f995967084f1607668644666754281b.scope: Deactivated successfully.
Nov 24 09:25:43 compute-0 systemd[1]: Reloading.
Nov 24 09:25:43 compute-0 systemd-rc-local-generator[79435]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:25:43 compute-0 systemd-sysv-generator[79438]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:25:43 compute-0 ceph-mon[74331]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:25:43 compute-0 ceph-mon[74331]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:25:43 compute-0 ceph-mon[74331]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:25:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:25:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 09:25:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:44 compute-0 sudo[79468]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opddmcagzvpfxakwroccufekhorrpuea ; /usr/bin/python3'
Nov 24 09:25:44 compute-0 sudo[79468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:25:44 compute-0 systemd[1]: Reloading.
Nov 24 09:25:44 compute-0 systemd-rc-local-generator[79500]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:25:44 compute-0 systemd-sysv-generator[79503]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:25:44 compute-0 python3[79474]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 09:25:44 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:25:44 compute-0 sudo[79468]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:44 compute-0 podman[79562]: 2025-11-24 09:25:44.647793614 +0000 UTC m=+0.051415055 container create dc063fe0e39f7bb4b4bbc5df375d462f040a3f4de9a4fcfb2144b8d329fa7c7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94906950a5dadcf50eb1c995c9529aa92bc257bd50dd7030400e964dbe1231b/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94906950a5dadcf50eb1c995c9529aa92bc257bd50dd7030400e964dbe1231b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94906950a5dadcf50eb1c995c9529aa92bc257bd50dd7030400e964dbe1231b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94906950a5dadcf50eb1c995c9529aa92bc257bd50dd7030400e964dbe1231b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:44 compute-0 podman[79562]: 2025-11-24 09:25:44.712985526 +0000 UTC m=+0.116606977 container init dc063fe0e39f7bb4b4bbc5df375d462f040a3f4de9a4fcfb2144b8d329fa7c7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:44 compute-0 podman[79562]: 2025-11-24 09:25:44.62342775 +0000 UTC m=+0.027049231 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:25:44 compute-0 sudo[79603]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emkamjfcuhmggdulviqhdjmiswxifvec ; /usr/bin/python3'
Nov 24 09:25:44 compute-0 podman[79562]: 2025-11-24 09:25:44.72320111 +0000 UTC m=+0.126822561 container start dc063fe0e39f7bb4b4bbc5df375d462f040a3f4de9a4fcfb2144b8d329fa7c7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:44 compute-0 sudo[79603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:25:44 compute-0 bash[79562]: dc063fe0e39f7bb4b4bbc5df375d462f040a3f4de9a4fcfb2144b8d329fa7c7f
Nov 24 09:25:44 compute-0 systemd[1]: Started Ceph crash.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:25:44 compute-0 sudo[79235]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:25:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:25:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 24 09:25:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 24 09:25:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:44 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 1bc2362b-3f77-49eb-94e2-d61e1fd7a670 (Updating crash deployment (+1 -> 1))
Nov 24 09:25:44 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 1bc2362b-3f77-49eb-94e2-d61e1fd7a670 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 24 09:25:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 24 09:25:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 24 09:25:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 24 09:25:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:44 compute-0 sudo[79610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:25:44 compute-0 sudo[79610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:44 compute-0 python3[79607]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:25:44 compute-0 sudo[79610]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: 2025-11-24T09:25:44.880+0000 7f6b6b3b8640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 24 09:25:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: 2025-11-24T09:25:44.880+0000 7f6b6b3b8640 -1 AuthRegistry(0x7f6b640698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 24 09:25:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: 2025-11-24T09:25:44.881+0000 7f6b6b3b8640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 24 09:25:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: 2025-11-24T09:25:44.881+0000 7f6b6b3b8640 -1 AuthRegistry(0x7f6b6b3b6ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 24 09:25:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: 2025-11-24T09:25:44.882+0000 7f6b6912d640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 24 09:25:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: 2025-11-24T09:25:44.882+0000 7f6b6b3b8640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 24 09:25:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 24 09:25:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 24 09:25:44 compute-0 sudo[79636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:44 compute-0 sudo[79636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:44 compute-0 sudo[79636]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:44 compute-0 podman[79635]: 2025-11-24 09:25:44.926081601 +0000 UTC m=+0.041040176 container create 28e8bdd06d8561dddc458240d7d3fb226c16a664eafe5f8d9219a487e2077f2c (image=quay.io/ceph/ceph:v19, name=modest_elion, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 09:25:44 compute-0 ceph-mon[74331]: Deploying daemon crash.compute-0 on compute-0
Nov 24 09:25:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:44 compute-0 systemd[1]: Started libpod-conmon-28e8bdd06d8561dddc458240d7d3fb226c16a664eafe5f8d9219a487e2077f2c.scope.
Nov 24 09:25:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:44 compute-0 sudo[79683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/439401d9555d6726cd48b0f6a25408f0747b14f274cee1cee96f331d35bbf58c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/439401d9555d6726cd48b0f6a25408f0747b14f274cee1cee96f331d35bbf58c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:44 compute-0 sudo[79683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/439401d9555d6726cd48b0f6a25408f0747b14f274cee1cee96f331d35bbf58c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:45 compute-0 podman[79635]: 2025-11-24 09:25:44.908880346 +0000 UTC m=+0.023838951 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:45 compute-0 podman[79635]: 2025-11-24 09:25:45.015163977 +0000 UTC m=+0.130122572 container init 28e8bdd06d8561dddc458240d7d3fb226c16a664eafe5f8d9219a487e2077f2c (image=quay.io/ceph/ceph:v19, name=modest_elion, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:25:45 compute-0 podman[79635]: 2025-11-24 09:25:45.021111164 +0000 UTC m=+0.136069739 container start 28e8bdd06d8561dddc458240d7d3fb226c16a664eafe5f8d9219a487e2077f2c (image=quay.io/ceph/ceph:v19, name=modest_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 09:25:45 compute-0 podman[79635]: 2025-11-24 09:25:45.02377653 +0000 UTC m=+0.138735105 container attach 28e8bdd06d8561dddc458240d7d3fb226c16a664eafe5f8d9219a487e2077f2c (image=quay.io/ceph/ceph:v19, name=modest_elion, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:25:45 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:25:45 compute-0 modest_elion[79708]: 
Nov 24 09:25:45 compute-0 modest_elion[79708]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 09:25:45 compute-0 systemd[1]: libpod-28e8bdd06d8561dddc458240d7d3fb226c16a664eafe5f8d9219a487e2077f2c.scope: Deactivated successfully.
Nov 24 09:25:45 compute-0 podman[79635]: 2025-11-24 09:25:45.389533573 +0000 UTC m=+0.504492168 container died 28e8bdd06d8561dddc458240d7d3fb226c16a664eafe5f8d9219a487e2077f2c (image=quay.io/ceph/ceph:v19, name=modest_elion, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-439401d9555d6726cd48b0f6a25408f0747b14f274cee1cee96f331d35bbf58c-merged.mount: Deactivated successfully.
Nov 24 09:25:45 compute-0 podman[79635]: 2025-11-24 09:25:45.422042009 +0000 UTC m=+0.537000584 container remove 28e8bdd06d8561dddc458240d7d3fb226c16a664eafe5f8d9219a487e2077f2c (image=quay.io/ceph/ceph:v19, name=modest_elion, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 24 09:25:45 compute-0 systemd[1]: libpod-conmon-28e8bdd06d8561dddc458240d7d3fb226c16a664eafe5f8d9219a487e2077f2c.scope: Deactivated successfully.
Nov 24 09:25:45 compute-0 sudo[79603]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:45 compute-0 podman[79818]: 2025-11-24 09:25:45.502534911 +0000 UTC m=+0.049820774 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 24 09:25:45 compute-0 ceph-mgr[74626]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 24 09:25:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:45 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 1 completed events
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 podman[79818]: 2025-11-24 09:25:45.591455292 +0000 UTC m=+0.138741155 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 24 09:25:45 compute-0 sudo[79899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aivitvmkwcmxbbbjjgcupvuhyxcqwnhh ; /usr/bin/python3'
Nov 24 09:25:45 compute-0 sudo[79899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:25:45 compute-0 sudo[79683]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 sudo[79914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:25:45 compute-0 sudo[79914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:45 compute-0 python3[79911]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:25:45 compute-0 sudo[79914]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 24 09:25:45 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:25:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:45 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 09:25:45 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 09:25:45 compute-0 podman[79939]: 2025-11-24 09:25:45.930757291 +0000 UTC m=+0.038050273 container create afb6baf4f1d65ae445ab5b40cfa35c92d19baf4a70e89beca1c107372d29a130 (image=quay.io/ceph/ceph:v19, name=ecstatic_wiles, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:25:45 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:45 compute-0 systemd[1]: Started libpod-conmon-afb6baf4f1d65ae445ab5b40cfa35c92d19baf4a70e89beca1c107372d29a130.scope.
Nov 24 09:25:45 compute-0 sudo[79950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:45 compute-0 sudo[79950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:45 compute-0 sudo[79950]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42efbda1fd2a29c4a36139f73e5d21b66956f01299758bad6821e1e54ec4f43/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42efbda1fd2a29c4a36139f73e5d21b66956f01299758bad6821e1e54ec4f43/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42efbda1fd2a29c4a36139f73e5d21b66956f01299758bad6821e1e54ec4f43/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:46 compute-0 podman[79939]: 2025-11-24 09:25:45.915084483 +0000 UTC m=+0.022377485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:46 compute-0 podman[79939]: 2025-11-24 09:25:46.023617909 +0000 UTC m=+0.130910941 container init afb6baf4f1d65ae445ab5b40cfa35c92d19baf4a70e89beca1c107372d29a130 (image=quay.io/ceph/ceph:v19, name=ecstatic_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:25:46 compute-0 sudo[79983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:46 compute-0 sudo[79983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:46 compute-0 podman[79939]: 2025-11-24 09:25:46.033071214 +0000 UTC m=+0.140364196 container start afb6baf4f1d65ae445ab5b40cfa35c92d19baf4a70e89beca1c107372d29a130 (image=quay.io/ceph/ceph:v19, name=ecstatic_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:25:46 compute-0 podman[79939]: 2025-11-24 09:25:46.036315184 +0000 UTC m=+0.143608206 container attach afb6baf4f1d65ae445ab5b40cfa35c92d19baf4a70e89beca1c107372d29a130 (image=quay.io/ceph/ceph:v19, name=ecstatic_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:46 compute-0 podman[80045]: 2025-11-24 09:25:46.324641731 +0000 UTC m=+0.044412380 container create 1fb7f905ff114a548f17e799c6f6b5160013a3e9e0216a6709aec8900e61f16f (image=quay.io/ceph/ceph:v19, name=compassionate_joliot, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:46 compute-0 systemd[1]: Started libpod-conmon-1fb7f905ff114a548f17e799c6f6b5160013a3e9e0216a6709aec8900e61f16f.scope.
Nov 24 09:25:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Nov 24 09:25:46 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1300268686' entity='client.admin' 
Nov 24 09:25:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:46 compute-0 podman[80045]: 2025-11-24 09:25:46.39167643 +0000 UTC m=+0.111447099 container init 1fb7f905ff114a548f17e799c6f6b5160013a3e9e0216a6709aec8900e61f16f (image=quay.io/ceph/ceph:v19, name=compassionate_joliot, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:46 compute-0 systemd[1]: libpod-afb6baf4f1d65ae445ab5b40cfa35c92d19baf4a70e89beca1c107372d29a130.scope: Deactivated successfully.
Nov 24 09:25:46 compute-0 podman[79939]: 2025-11-24 09:25:46.394457659 +0000 UTC m=+0.501750641 container died afb6baf4f1d65ae445ab5b40cfa35c92d19baf4a70e89beca1c107372d29a130 (image=quay.io/ceph/ceph:v19, name=ecstatic_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 09:25:46 compute-0 podman[80045]: 2025-11-24 09:25:46.399230768 +0000 UTC m=+0.119001427 container start 1fb7f905ff114a548f17e799c6f6b5160013a3e9e0216a6709aec8900e61f16f (image=quay.io/ceph/ceph:v19, name=compassionate_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 24 09:25:46 compute-0 podman[80045]: 2025-11-24 09:25:46.304978114 +0000 UTC m=+0.024748803 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:46 compute-0 compassionate_joliot[80061]: 167 167
Nov 24 09:25:46 compute-0 podman[80045]: 2025-11-24 09:25:46.406825875 +0000 UTC m=+0.126596564 container attach 1fb7f905ff114a548f17e799c6f6b5160013a3e9e0216a6709aec8900e61f16f (image=quay.io/ceph/ceph:v19, name=compassionate_joliot, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:25:46 compute-0 systemd[1]: libpod-1fb7f905ff114a548f17e799c6f6b5160013a3e9e0216a6709aec8900e61f16f.scope: Deactivated successfully.
Nov 24 09:25:46 compute-0 podman[80045]: 2025-11-24 09:25:46.417072609 +0000 UTC m=+0.136843258 container died 1fb7f905ff114a548f17e799c6f6b5160013a3e9e0216a6709aec8900e61f16f (image=quay.io/ceph/ceph:v19, name=compassionate_joliot, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Nov 24 09:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a42efbda1fd2a29c4a36139f73e5d21b66956f01299758bad6821e1e54ec4f43-merged.mount: Deactivated successfully.
Nov 24 09:25:46 compute-0 podman[79939]: 2025-11-24 09:25:46.4438092 +0000 UTC m=+0.551102172 container remove afb6baf4f1d65ae445ab5b40cfa35c92d19baf4a70e89beca1c107372d29a130 (image=quay.io/ceph/ceph:v19, name=ecstatic_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 09:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f2e22c52a5ad3ff4d47694471175d140940ebc60d76a9604909e0ea679a1785-merged.mount: Deactivated successfully.
Nov 24 09:25:46 compute-0 podman[80045]: 2025-11-24 09:25:46.462067992 +0000 UTC m=+0.181838681 container remove 1fb7f905ff114a548f17e799c6f6b5160013a3e9e0216a6709aec8900e61f16f (image=quay.io/ceph/ceph:v19, name=compassionate_joliot, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:25:46 compute-0 sudo[79899]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:46 compute-0 systemd[1]: libpod-conmon-1fb7f905ff114a548f17e799c6f6b5160013a3e9e0216a6709aec8900e61f16f.scope: Deactivated successfully.
Nov 24 09:25:46 compute-0 systemd[1]: libpod-conmon-afb6baf4f1d65ae445ab5b40cfa35c92d19baf4a70e89beca1c107372d29a130.scope: Deactivated successfully.
Nov 24 09:25:46 compute-0 sudo[79983]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:25:46 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:25:46 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:46 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.mauvni (unknown last config time)...
Nov 24 09:25:46 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.mauvni (unknown last config time)...
Nov 24 09:25:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 24 09:25:46 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:25:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 09:25:46 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:25:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:25:46 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:46 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.mauvni on compute-0
Nov 24 09:25:46 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.mauvni on compute-0
Nov 24 09:25:46 compute-0 sudo[80091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:46 compute-0 sudo[80091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:46 compute-0 sudo[80091]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:46 compute-0 sudo[80116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:25:46 compute-0 sudo[80116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:46 compute-0 sudo[80162]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjcpcmkdttgrlumarhltlejxghwsixlt ; /usr/bin/python3'
Nov 24 09:25:46 compute-0 sudo[80162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:25:46 compute-0 python3[80166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:25:46 compute-0 podman[80167]: 2025-11-24 09:25:46.833220241 +0000 UTC m=+0.040218097 container create fce7b1dd89bc904b2b1a2b43842833203eb8ecf23bc4fe2109f235cc6e406bb2 (image=quay.io/ceph/ceph:v19, name=nifty_wu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 24 09:25:46 compute-0 systemd[1]: Started libpod-conmon-fce7b1dd89bc904b2b1a2b43842833203eb8ecf23bc4fe2109f235cc6e406bb2.scope.
Nov 24 09:25:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a9903b9385eaabe49e69b2fda8d6e4ea30fd9e2eb549ffe4f91b3d7d8b44f88/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a9903b9385eaabe49e69b2fda8d6e4ea30fd9e2eb549ffe4f91b3d7d8b44f88/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a9903b9385eaabe49e69b2fda8d6e4ea30fd9e2eb549ffe4f91b3d7d8b44f88/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:46 compute-0 podman[80167]: 2025-11-24 09:25:46.816512457 +0000 UTC m=+0.023510333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:46 compute-0 podman[80167]: 2025-11-24 09:25:46.919936896 +0000 UTC m=+0.126934772 container init fce7b1dd89bc904b2b1a2b43842833203eb8ecf23bc4fe2109f235cc6e406bb2 (image=quay.io/ceph/ceph:v19, name=nifty_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:46 compute-0 podman[80167]: 2025-11-24 09:25:46.925885094 +0000 UTC m=+0.132882950 container start fce7b1dd89bc904b2b1a2b43842833203eb8ecf23bc4fe2109f235cc6e406bb2 (image=quay.io/ceph/ceph:v19, name=nifty_wu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 09:25:46 compute-0 podman[80167]: 2025-11-24 09:25:46.93099176 +0000 UTC m=+0.137989646 container attach fce7b1dd89bc904b2b1a2b43842833203eb8ecf23bc4fe2109f235cc6e406bb2 (image=quay.io/ceph/ceph:v19, name=nifty_wu, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 09:25:46 compute-0 ansible-async_wrapper.py[78612]: Done in kid B.
Nov 24 09:25:46 compute-0 ceph-mon[74331]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:25:46 compute-0 ceph-mon[74331]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:46 compute-0 ceph-mon[74331]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 24 09:25:46 compute-0 ceph-mon[74331]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 09:25:46 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1300268686' entity='client.admin' 
Nov 24 09:25:46 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:46 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:46 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:25:46 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:25:46 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:46 compute-0 podman[80200]: 2025-11-24 09:25:46.960294825 +0000 UTC m=+0.050697735 container create a032b8f15d27edfb9686231dd5c8dfd59bc6203db23e8f0ffefbdd56f92993c6 (image=quay.io/ceph/ceph:v19, name=gracious_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:46 compute-0 systemd[1]: Started libpod-conmon-a032b8f15d27edfb9686231dd5c8dfd59bc6203db23e8f0ffefbdd56f92993c6.scope.
Nov 24 09:25:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:47 compute-0 podman[80200]: 2025-11-24 09:25:47.029473728 +0000 UTC m=+0.119876668 container init a032b8f15d27edfb9686231dd5c8dfd59bc6203db23e8f0ffefbdd56f92993c6 (image=quay.io/ceph/ceph:v19, name=gracious_lovelace, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:25:47 compute-0 podman[80200]: 2025-11-24 09:25:47.03517981 +0000 UTC m=+0.125582720 container start a032b8f15d27edfb9686231dd5c8dfd59bc6203db23e8f0ffefbdd56f92993c6 (image=quay.io/ceph/ceph:v19, name=gracious_lovelace, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 09:25:47 compute-0 gracious_lovelace[80217]: 167 167
Nov 24 09:25:47 compute-0 podman[80200]: 2025-11-24 09:25:46.941834099 +0000 UTC m=+0.032237049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:47 compute-0 podman[80200]: 2025-11-24 09:25:47.037985309 +0000 UTC m=+0.128388219 container attach a032b8f15d27edfb9686231dd5c8dfd59bc6203db23e8f0ffefbdd56f92993c6 (image=quay.io/ceph/ceph:v19, name=gracious_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:25:47 compute-0 systemd[1]: libpod-a032b8f15d27edfb9686231dd5c8dfd59bc6203db23e8f0ffefbdd56f92993c6.scope: Deactivated successfully.
Nov 24 09:25:47 compute-0 podman[80200]: 2025-11-24 09:25:47.03884446 +0000 UTC m=+0.129247390 container died a032b8f15d27edfb9686231dd5c8dfd59bc6203db23e8f0ffefbdd56f92993c6 (image=quay.io/ceph/ceph:v19, name=gracious_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Nov 24 09:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b68de6ca06c30d1215eb4662cf5990bca51ef09c19ec41e5389cf30f48db4b64-merged.mount: Deactivated successfully.
Nov 24 09:25:47 compute-0 podman[80200]: 2025-11-24 09:25:47.0703533 +0000 UTC m=+0.160756210 container remove a032b8f15d27edfb9686231dd5c8dfd59bc6203db23e8f0ffefbdd56f92993c6 (image=quay.io/ceph/ceph:v19, name=gracious_lovelace, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:25:47 compute-0 systemd[1]: libpod-conmon-a032b8f15d27edfb9686231dd5c8dfd59bc6203db23e8f0ffefbdd56f92993c6.scope: Deactivated successfully.
Nov 24 09:25:47 compute-0 sudo[80116]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:25:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:25:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:25:47 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:25:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:25:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:25:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:47 compute-0 sudo[80251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:25:47 compute-0 sudo[80251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:47 compute-0 sudo[80251]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Nov 24 09:25:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1317558612' entity='client.admin' 
Nov 24 09:25:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:25:47 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:25:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:25:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:25:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:47 compute-0 systemd[1]: libpod-fce7b1dd89bc904b2b1a2b43842833203eb8ecf23bc4fe2109f235cc6e406bb2.scope: Deactivated successfully.
Nov 24 09:25:47 compute-0 podman[80167]: 2025-11-24 09:25:47.298373585 +0000 UTC m=+0.505371441 container died fce7b1dd89bc904b2b1a2b43842833203eb8ecf23bc4fe2109f235cc6e406bb2 (image=quay.io/ceph/ceph:v19, name=nifty_wu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 24 09:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a9903b9385eaabe49e69b2fda8d6e4ea30fd9e2eb549ffe4f91b3d7d8b44f88-merged.mount: Deactivated successfully.
Nov 24 09:25:47 compute-0 podman[80167]: 2025-11-24 09:25:47.334252662 +0000 UTC m=+0.541250518 container remove fce7b1dd89bc904b2b1a2b43842833203eb8ecf23bc4fe2109f235cc6e406bb2 (image=quay.io/ceph/ceph:v19, name=nifty_wu, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:25:47 compute-0 systemd[1]: libpod-conmon-fce7b1dd89bc904b2b1a2b43842833203eb8ecf23bc4fe2109f235cc6e406bb2.scope: Deactivated successfully.
Nov 24 09:25:47 compute-0 sudo[80279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:25:47 compute-0 sudo[80279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:47 compute-0 sudo[80279]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:47 compute-0 sudo[80162]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:47 compute-0 sudo[80336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvouxrnlyplbghlshlfaevplfyqfllcj ; /usr/bin/python3'
Nov 24 09:25:47 compute-0 sudo[80336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:25:47 compute-0 python3[80338]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:25:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:25:47 compute-0 podman[80339]: 2025-11-24 09:25:47.738745565 +0000 UTC m=+0.037116609 container create 92101df978cde847088e83635c79859abb931accce46735e08cd93cee74f885d (image=quay.io/ceph/ceph:v19, name=admiring_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 09:25:47 compute-0 systemd[1]: Started libpod-conmon-92101df978cde847088e83635c79859abb931accce46735e08cd93cee74f885d.scope.
Nov 24 09:25:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f2e5d3e0a97e942665a02a393d224d14ddb2382de85f891dbe34f825fe47011/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f2e5d3e0a97e942665a02a393d224d14ddb2382de85f891dbe34f825fe47011/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f2e5d3e0a97e942665a02a393d224d14ddb2382de85f891dbe34f825fe47011/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:47 compute-0 podman[80339]: 2025-11-24 09:25:47.796371011 +0000 UTC m=+0.094742075 container init 92101df978cde847088e83635c79859abb931accce46735e08cd93cee74f885d (image=quay.io/ceph/ceph:v19, name=admiring_engelbart, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:25:47 compute-0 podman[80339]: 2025-11-24 09:25:47.802662987 +0000 UTC m=+0.101034031 container start 92101df978cde847088e83635c79859abb931accce46735e08cd93cee74f885d (image=quay.io/ceph/ceph:v19, name=admiring_engelbart, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:25:47 compute-0 podman[80339]: 2025-11-24 09:25:47.805257402 +0000 UTC m=+0.103628446 container attach 92101df978cde847088e83635c79859abb931accce46735e08cd93cee74f885d (image=quay.io/ceph/ceph:v19, name=admiring_engelbart, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:25:47 compute-0 podman[80339]: 2025-11-24 09:25:47.723343954 +0000 UTC m=+0.021714998 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:47 compute-0 ceph-mon[74331]: Reconfiguring mgr.compute-0.mauvni (unknown last config time)...
Nov 24 09:25:47 compute-0 ceph-mon[74331]: Reconfiguring daemon mgr.compute-0.mauvni on compute-0
Nov 24 09:25:47 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:47 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:47 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:47 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:25:47 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:47 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1317558612' entity='client.admin' 
Nov 24 09:25:47 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:47 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:25:47 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Nov 24 09:25:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/603681033' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 24 09:25:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 24 09:25:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 09:25:48 compute-0 ceph-mon[74331]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:48 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/603681033' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 24 09:25:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/603681033' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 24 09:25:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 24 09:25:48 compute-0 admiring_engelbart[80354]: set require_min_compat_client to mimic
Nov 24 09:25:48 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 24 09:25:48 compute-0 systemd[1]: libpod-92101df978cde847088e83635c79859abb931accce46735e08cd93cee74f885d.scope: Deactivated successfully.
Nov 24 09:25:48 compute-0 podman[80339]: 2025-11-24 09:25:48.990805257 +0000 UTC m=+1.289176301 container died 92101df978cde847088e83635c79859abb931accce46735e08cd93cee74f885d (image=quay.io/ceph/ceph:v19, name=admiring_engelbart, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f2e5d3e0a97e942665a02a393d224d14ddb2382de85f891dbe34f825fe47011-merged.mount: Deactivated successfully.
Nov 24 09:25:49 compute-0 podman[80339]: 2025-11-24 09:25:49.027370723 +0000 UTC m=+1.325741807 container remove 92101df978cde847088e83635c79859abb931accce46735e08cd93cee74f885d (image=quay.io/ceph/ceph:v19, name=admiring_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 24 09:25:49 compute-0 systemd[1]: libpod-conmon-92101df978cde847088e83635c79859abb931accce46735e08cd93cee74f885d.scope: Deactivated successfully.
Nov 24 09:25:49 compute-0 sudo[80336]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:49 compute-0 sudo[80413]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwazuznznberrtpqffgzgkjpxacalkcu ; /usr/bin/python3'
Nov 24 09:25:49 compute-0 sudo[80413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:25:49 compute-0 python3[80415]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:25:49 compute-0 podman[80416]: 2025-11-24 09:25:49.766780506 +0000 UTC m=+0.042399391 container create 76a336158a47dcac8e5da1148c0b985ea33347f12ce63856fbd6476b9eb50566 (image=quay.io/ceph/ceph:v19, name=awesome_kare, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:25:49 compute-0 systemd[1]: Started libpod-conmon-76a336158a47dcac8e5da1148c0b985ea33347f12ce63856fbd6476b9eb50566.scope.
Nov 24 09:25:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325c9b37d90e8175fe9e39580de2158199c558bbe8a9da9c582e549023728570/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325c9b37d90e8175fe9e39580de2158199c558bbe8a9da9c582e549023728570/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325c9b37d90e8175fe9e39580de2158199c558bbe8a9da9c582e549023728570/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:25:49 compute-0 podman[80416]: 2025-11-24 09:25:49.745718255 +0000 UTC m=+0.021337190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:25:49 compute-0 podman[80416]: 2025-11-24 09:25:49.848911229 +0000 UTC m=+0.124530144 container init 76a336158a47dcac8e5da1148c0b985ea33347f12ce63856fbd6476b9eb50566 (image=quay.io/ceph/ceph:v19, name=awesome_kare, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:25:49 compute-0 podman[80416]: 2025-11-24 09:25:49.858422304 +0000 UTC m=+0.134041199 container start 76a336158a47dcac8e5da1148c0b985ea33347f12ce63856fbd6476b9eb50566 (image=quay.io/ceph/ceph:v19, name=awesome_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:25:49 compute-0 podman[80416]: 2025-11-24 09:25:49.861298866 +0000 UTC m=+0.136917741 container attach 76a336158a47dcac8e5da1148c0b985ea33347f12ce63856fbd6476b9eb50566 (image=quay.io/ceph/ceph:v19, name=awesome_kare, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:25:49 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/603681033' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 24 09:25:49 compute-0 ceph-mon[74331]: osdmap e3: 0 total, 0 up, 0 in
Nov 24 09:25:50 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:50 compute-0 sudo[80455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:25:50 compute-0 sudo[80455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:50 compute-0 sudo[80455]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:50 compute-0 sudo[80480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Nov 24 09:25:50 compute-0 sudo[80480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:50 compute-0 sudo[80480]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 24 09:25:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 24 09:25:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 24 09:25:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 24 09:25:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:50 compute-0 ceph-mgr[74626]: [cephadm INFO root] Added host compute-0
Nov 24 09:25:50 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 24 09:25:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:25:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:25:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:25:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:25:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:50 compute-0 sudo[80526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:25:50 compute-0 sudo[80526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:25:50 compute-0 sudo[80526]: pam_unix(sudo:session): session closed for user root
Nov 24 09:25:50 compute-0 ceph-mon[74331]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:50 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:50 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:50 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:50 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:50 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:25:50 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:25:50 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:51 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Nov 24 09:25:51 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Nov 24 09:25:51 compute-0 ceph-mon[74331]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:25:51 compute-0 ceph-mon[74331]: Added host compute-0
Nov 24 09:25:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:25:52 compute-0 ceph-mon[74331]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:52 compute-0 ceph-mon[74331]: Deploying cephadm binary to compute-1
Nov 24 09:25:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:55 compute-0 ceph-mon[74331]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:55 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 24 09:25:55 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:55 compute-0 ceph-mgr[74626]: [cephadm INFO root] Added host compute-1
Nov 24 09:25:55 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Added host compute-1
Nov 24 09:25:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:55 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:25:55 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:25:55 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:25:55 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:25:55 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:25:55 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:25:55 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:25:55 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:25:56 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:56 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:56 compute-0 ceph-mon[74331]: Added host compute-1
Nov 24 09:25:56 compute-0 ceph-mon[74331]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:56 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:56 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:56 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Nov 24 09:25:56 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Nov 24 09:25:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:25:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:57 compute-0 ceph-mon[74331]: Deploying cephadm binary to compute-2
Nov 24 09:25:57 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:25:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:25:58 compute-0 ceph-mon[74331]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:25:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:00 compute-0 ceph-mon[74331]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 24 09:26:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:00 compute-0 ceph-mgr[74626]: [cephadm INFO root] Added host compute-2
Nov 24 09:26:00 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Added host compute-2
Nov 24 09:26:00 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 24 09:26:00 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 24 09:26:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 24 09:26:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:00 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 24 09:26:00 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 24 09:26:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 24 09:26:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:00 compute-0 ceph-mgr[74626]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 24 09:26:00 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 24 09:26:00 compute-0 ceph-mgr[74626]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Nov 24 09:26:00 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Nov 24 09:26:00 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 24 09:26:00 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 24 09:26:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Nov 24 09:26:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:00 compute-0 awesome_kare[80431]: Added host 'compute-0' with addr '192.168.122.100'
Nov 24 09:26:00 compute-0 awesome_kare[80431]: Added host 'compute-1' with addr '192.168.122.101'
Nov 24 09:26:00 compute-0 awesome_kare[80431]: Added host 'compute-2' with addr '192.168.122.102'
Nov 24 09:26:00 compute-0 awesome_kare[80431]: Scheduled mon update...
Nov 24 09:26:00 compute-0 awesome_kare[80431]: Scheduled mgr update...
Nov 24 09:26:00 compute-0 awesome_kare[80431]: Scheduled osd.default_drive_group update...
Nov 24 09:26:00 compute-0 systemd[1]: libpod-76a336158a47dcac8e5da1148c0b985ea33347f12ce63856fbd6476b9eb50566.scope: Deactivated successfully.
Nov 24 09:26:00 compute-0 podman[80416]: 2025-11-24 09:26:00.69787138 +0000 UTC m=+10.973490265 container died 76a336158a47dcac8e5da1148c0b985ea33347f12ce63856fbd6476b9eb50566 (image=quay.io/ceph/ceph:v19, name=awesome_kare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-325c9b37d90e8175fe9e39580de2158199c558bbe8a9da9c582e549023728570-merged.mount: Deactivated successfully.
Nov 24 09:26:00 compute-0 podman[80416]: 2025-11-24 09:26:00.735486871 +0000 UTC m=+11.011105756 container remove 76a336158a47dcac8e5da1148c0b985ea33347f12ce63856fbd6476b9eb50566 (image=quay.io/ceph/ceph:v19, name=awesome_kare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:26:00 compute-0 systemd[1]: libpod-conmon-76a336158a47dcac8e5da1148c0b985ea33347f12ce63856fbd6476b9eb50566.scope: Deactivated successfully.
Nov 24 09:26:00 compute-0 sudo[80413]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:01 compute-0 sudo[80590]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdpcpnxeukwpkqjuyzfvbyiulpuurawt ; /usr/bin/python3'
Nov 24 09:26:01 compute-0 sudo[80590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:26:01 compute-0 python3[80592]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:26:01 compute-0 podman[80594]: 2025-11-24 09:26:01.209471623 +0000 UTC m=+0.043126758 container create 80190154d93c3e11410f09a2a040eaa7aea9fd08bdc8757598c1e151b5c3bcf7 (image=quay.io/ceph/ceph:v19, name=affectionate_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:26:01 compute-0 systemd[1]: Started libpod-conmon-80190154d93c3e11410f09a2a040eaa7aea9fd08bdc8757598c1e151b5c3bcf7.scope.
Nov 24 09:26:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca6f79d2461a5e498f013af54cc8d70bc3b7dbd4596b5e3634af81db0430b71/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca6f79d2461a5e498f013af54cc8d70bc3b7dbd4596b5e3634af81db0430b71/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca6f79d2461a5e498f013af54cc8d70bc3b7dbd4596b5e3634af81db0430b71/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:01 compute-0 podman[80594]: 2025-11-24 09:26:01.282587213 +0000 UTC m=+0.116242438 container init 80190154d93c3e11410f09a2a040eaa7aea9fd08bdc8757598c1e151b5c3bcf7 (image=quay.io/ceph/ceph:v19, name=affectionate_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:01 compute-0 podman[80594]: 2025-11-24 09:26:01.18750675 +0000 UTC m=+0.021161915 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:26:01 compute-0 podman[80594]: 2025-11-24 09:26:01.289163476 +0000 UTC m=+0.122818631 container start 80190154d93c3e11410f09a2a040eaa7aea9fd08bdc8757598c1e151b5c3bcf7 (image=quay.io/ceph/ceph:v19, name=affectionate_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 09:26:01 compute-0 podman[80594]: 2025-11-24 09:26:01.292365195 +0000 UTC m=+0.126020320 container attach 80190154d93c3e11410f09a2a040eaa7aea9fd08bdc8757598c1e151b5c3bcf7 (image=quay.io/ceph/ceph:v19, name=affectionate_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:26:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:01 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:01 compute-0 ceph-mon[74331]: Added host compute-2
Nov 24 09:26:01 compute-0 ceph-mon[74331]: Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 24 09:26:01 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:01 compute-0 ceph-mon[74331]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 24 09:26:01 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:01 compute-0 ceph-mon[74331]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 24 09:26:01 compute-0 ceph-mon[74331]: Marking host: compute-1 for OSDSpec preview refresh.
Nov 24 09:26:01 compute-0 ceph-mon[74331]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 24 09:26:01 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 24 09:26:01 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/81755289' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 09:26:01 compute-0 affectionate_brahmagupta[80611]: 
Nov 24 09:26:01 compute-0 affectionate_brahmagupta[80611]: {"fsid":"84a084c3-61a7-5de7-8207-1f88efa59a64","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":54,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-11-24T09:25:05.540478+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-24T09:25:05.542309+0000","services":{}},"progress_events":{}}
Nov 24 09:26:01 compute-0 systemd[1]: libpod-80190154d93c3e11410f09a2a040eaa7aea9fd08bdc8757598c1e151b5c3bcf7.scope: Deactivated successfully.
Nov 24 09:26:01 compute-0 podman[80594]: 2025-11-24 09:26:01.769044775 +0000 UTC m=+0.602699910 container died 80190154d93c3e11410f09a2a040eaa7aea9fd08bdc8757598c1e151b5c3bcf7 (image=quay.io/ceph/ceph:v19, name=affectionate_brahmagupta, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:26:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ca6f79d2461a5e498f013af54cc8d70bc3b7dbd4596b5e3634af81db0430b71-merged.mount: Deactivated successfully.
Nov 24 09:26:01 compute-0 podman[80594]: 2025-11-24 09:26:01.806198854 +0000 UTC m=+0.639853999 container remove 80190154d93c3e11410f09a2a040eaa7aea9fd08bdc8757598c1e151b5c3bcf7 (image=quay.io/ceph/ceph:v19, name=affectionate_brahmagupta, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:01 compute-0 systemd[1]: libpod-conmon-80190154d93c3e11410f09a2a040eaa7aea9fd08bdc8757598c1e151b5c3bcf7.scope: Deactivated successfully.
Nov 24 09:26:01 compute-0 sudo[80590]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:02 compute-0 ceph-mon[74331]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/81755289' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 09:26:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:26:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:04 compute-0 ceph-mon[74331]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:06 compute-0 ceph-mon[74331]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:26:08 compute-0 ceph-mon[74331]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:10 compute-0 ceph-mon[74331]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:26:12 compute-0 ceph-mon[74331]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:14 compute-0 ceph-mon[74331]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:26:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:26:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:26:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:26:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 09:26:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:26:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:26:14 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:26:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:26:14 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:26:14 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:26:15 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:26:15 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:26:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:15 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:15 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:15 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:15 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:15 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:26:15 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:15 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:26:15 compute-0 ceph-mon[74331]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:26:15 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:26:15 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:26:16 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:26:16 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:26:16 compute-0 ceph-mon[74331]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:26:16 compute-0 ceph-mon[74331]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:16 compute-0 ceph-mon[74331]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:26:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:26:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:26:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:26:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:16 compute-0 ceph-mgr[74626]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 24 09:26:16 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 24 09:26:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:16 compute-0 ceph-mgr[74626]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 24 09:26:16 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 24 09:26:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:16 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 274dc2cc-b9d9-45c4-bdc1-f3a42134015c (Updating crash deployment (+1 -> 2))
Nov 24 09:26:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 24 09:26:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:26:16.940+0000 7fa43b1f3640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: service_name: mon
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: placement:
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   hosts:
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   - compute-0
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   - compute-1
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   - compute-2
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:26:16.941+0000 7fa43b1f3640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: service_name: mgr
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: placement:
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   hosts:
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   - compute-0
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   - compute-1
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   - compute-2
Nov 24 09:26:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 24 09:26:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 09:26:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:26:16 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:16 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Nov 24 09:26:16 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Nov 24 09:26:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:26:17 compute-0 ceph-mon[74331]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:26:17 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:17 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:17 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:17 compute-0 ceph-mon[74331]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 24 09:26:17 compute-0 ceph-mon[74331]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:17 compute-0 ceph-mon[74331]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 24 09:26:17 compute-0 ceph-mon[74331]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:17 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:26:17 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 09:26:17 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:17 compute-0 ceph-mon[74331]: Deploying daemon crash.compute-1 on compute-1
Nov 24 09:26:17 compute-0 ceph-mon[74331]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 24 09:26:18 compute-0 ceph-mon[74331]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
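[The CEPHADM_APPLY_SPEC_FAIL above follows directly from the two "Cannot place ... on compute-2: Unknown hosts" errors: the mon and mgr specs name compute-2, but that host has not been added to the cephadm inventory yet. A sketch of the usual fix; the address below is a placeholder, not a value from this log:

    ceph orch host ls                   # compute-2 is absent from the list
    ceph orch host add compute-2 ADDR   # ADDR is hypothetical here
    # cephadm retries stored specs on its serve loop, so the health warning
    # should clear on its own once the host is reachable.
]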
Nov 24 09:26:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:26:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:26:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 24 09:26:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:19 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 274dc2cc-b9d9-45c4-bdc1-f3a42134015c (Updating crash deployment (+1 -> 2))
Nov 24 09:26:19 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 274dc2cc-b9d9-45c4-bdc1-f3a42134015c (Updating crash deployment (+1 -> 2)) in 2 seconds
Nov 24 09:26:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 24 09:26:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:26:19 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:26:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:26:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:26:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:26:19 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:26:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:26:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:26:19 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:19 compute-0 sudo[80646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:19 compute-0 sudo[80646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:19 compute-0 sudo[80646]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:19 compute-0 sudo[80671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:26:19 compute-0 sudo[80671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
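[The sudo line above is the cephadm mgr module pushing an OSD-creation step onto the host: it re-invokes the copied cephadm binary, which in turn runs ceph-volume inside a ceph container. Stripped of the wrapper paths, the step is roughly equivalent to this sketch (same LV and fsid as in the log):

    cephadm --image quay.io/ceph/ceph:v19 ceph-volume \
        --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- \
        lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
    # batch --no-auto prepares exactly the listed devices; --no-systemd is
    # passed because cephadm manages the daemon units itself.
]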
Nov 24 09:26:19 compute-0 podman[80735]: 2025-11-24 09:26:19.784930761 +0000 UTC m=+0.046358198 container create daa0b37ada54a7b2580e16d381367cf9e2ae2406c77d94414142859b2385f91f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_euclid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:26:19 compute-0 systemd[1]: Started libpod-conmon-daa0b37ada54a7b2580e16d381367cf9e2ae2406c77d94414142859b2385f91f.scope.
Nov 24 09:26:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:19 compute-0 podman[80735]: 2025-11-24 09:26:19.858531753 +0000 UTC m=+0.119959210 container init daa0b37ada54a7b2580e16d381367cf9e2ae2406c77d94414142859b2385f91f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_euclid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 09:26:19 compute-0 podman[80735]: 2025-11-24 09:26:19.767623753 +0000 UTC m=+0.029051220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:19 compute-0 podman[80735]: 2025-11-24 09:26:19.863830404 +0000 UTC m=+0.125257841 container start daa0b37ada54a7b2580e16d381367cf9e2ae2406c77d94414142859b2385f91f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 24 09:26:19 compute-0 podman[80735]: 2025-11-24 09:26:19.867282 +0000 UTC m=+0.128709437 container attach daa0b37ada54a7b2580e16d381367cf9e2ae2406c77d94414142859b2385f91f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:26:19 compute-0 vibrant_euclid[80752]: 167 167
Nov 24 09:26:19 compute-0 systemd[1]: libpod-daa0b37ada54a7b2580e16d381367cf9e2ae2406c77d94414142859b2385f91f.scope: Deactivated successfully.
Nov 24 09:26:19 compute-0 podman[80735]: 2025-11-24 09:26:19.869000482 +0000 UTC m=+0.130427919 container died daa0b37ada54a7b2580e16d381367cf9e2ae2406c77d94414142859b2385f91f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_euclid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 09:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b9cd2551cfca850901781f1197a47f7f6016ad455f206fb2e6b1809879a52bb-merged.mount: Deactivated successfully.
Nov 24 09:26:19 compute-0 podman[80735]: 2025-11-24 09:26:19.909666809 +0000 UTC m=+0.171094246 container remove daa0b37ada54a7b2580e16d381367cf9e2ae2406c77d94414142859b2385f91f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 09:26:19 compute-0 systemd[1]: libpod-conmon-daa0b37ada54a7b2580e16d381367cf9e2ae2406c77d94414142859b2385f91f.scope: Deactivated successfully.
Nov 24 09:26:19 compute-0 ceph-mon[74331]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:19 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:19 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:19 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:19 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:19 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:26:19 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:26:19 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:19 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:26:19 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:20 compute-0 podman[80775]: 2025-11-24 09:26:20.091989191 +0000 UTC m=+0.050362487 container create 769fd245fb69061b11b76070201740e4a1a491aa9ec1bb67dd25727b05c361b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chatelet, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:26:20 compute-0 systemd[1]: Started libpod-conmon-769fd245fb69061b11b76070201740e4a1a491aa9ec1bb67dd25727b05c361b8.scope.
Nov 24 09:26:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:20 compute-0 podman[80775]: 2025-11-24 09:26:20.065155677 +0000 UTC m=+0.023529063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e370d8e0d519df3a9663f8d537920779cd2c88fb229374c914fa1eb5a6f22329/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e370d8e0d519df3a9663f8d537920779cd2c88fb229374c914fa1eb5a6f22329/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e370d8e0d519df3a9663f8d537920779cd2c88fb229374c914fa1eb5a6f22329/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e370d8e0d519df3a9663f8d537920779cd2c88fb229374c914fa1eb5a6f22329/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e370d8e0d519df3a9663f8d537920779cd2c88fb229374c914fa1eb5a6f22329/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:20 compute-0 podman[80775]: 2025-11-24 09:26:20.171135181 +0000 UTC m=+0.129508487 container init 769fd245fb69061b11b76070201740e4a1a491aa9ec1bb67dd25727b05c361b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:26:20 compute-0 podman[80775]: 2025-11-24 09:26:20.179435877 +0000 UTC m=+0.137809163 container start 769fd245fb69061b11b76070201740e4a1a491aa9ec1bb67dd25727b05c361b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chatelet, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:26:20 compute-0 podman[80775]: 2025-11-24 09:26:20.18239134 +0000 UTC m=+0.140764646 container attach 769fd245fb69061b11b76070201740e4a1a491aa9ec1bb67dd25727b05c361b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chatelet, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:26:20 compute-0 quizzical_chatelet[80792]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:26:20 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:20 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 2 completed events
Nov 24 09:26:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:26:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:20 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:20 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c
Nov 24 09:26:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c"} v 0)
Nov 24 09:26:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3344904896' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c"}]: dispatch
Nov 24 09:26:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 24 09:26:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 09:26:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3344904896' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c"}]': finished
Nov 24 09:26:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 24 09:26:20 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 24 09:26:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:26:20 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:20 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
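[The "(2) No such file or directory" from the mgr is expected at this stage: osd.0 exists in the osdmap (the "osd new" just above) but the daemon has not booted, so it has registered no metadata yet. Once the OSD is up, the same query succeeds:

    ceph osd metadata 0   # returns a JSON blob (hostname, devices, bluestore info)
]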
Nov 24 09:26:21 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 24 09:26:21 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 24 09:26:21 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 09:26:21 compute-0 lvm[80853]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:26:21 compute-0 lvm[80853]: VG ceph_vg0 finished
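[The PV on /dev/loop3 shows the OSD is backed by a loop device rather than a real disk, which is common in test deployments like this one. A sketch of how such a volume group is typically prepared beforehand; the backing file name and size are assumptions, not values from this log:

    truncate -s 20G /var/lib/ceph-osd0.img   # sparse backing file (assumed name/size)
    losetup /dev/loop3 /var/lib/ceph-osd0.img
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -l 100%FREE -n ceph_lv0 ceph_vg0
]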
Nov 24 09:26:21 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:21 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 24 09:26:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "d66edcc6-663b-43db-9331-33ccbb320884"} v 0)
Nov 24 09:26:21 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3818245863' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d66edcc6-663b-43db-9331-33ccbb320884"}]: dispatch
Nov 24 09:26:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 24 09:26:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 09:26:21 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3818245863' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d66edcc6-663b-43db-9331-33ccbb320884"}]': finished
Nov 24 09:26:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 24 09:26:21 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 24 09:26:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:26:21 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 09:26:21 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:21 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 09:26:21 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 09:26:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Nov 24 09:26:21 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1822629335' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 09:26:21 compute-0 quizzical_chatelet[80792]:  stderr: got monmap epoch 1
Nov 24 09:26:21 compute-0 quizzical_chatelet[80792]: --> Creating keyring file for osd.0
Nov 24 09:26:21 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:21 compute-0 ceph-mon[74331]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3344904896' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c"}]: dispatch
Nov 24 09:26:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3344904896' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c"}]': finished
Nov 24 09:26:21 compute-0 ceph-mon[74331]: osdmap e4: 1 total, 0 up, 1 in
Nov 24 09:26:21 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3818245863' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d66edcc6-663b-43db-9331-33ccbb320884"}]: dispatch
Nov 24 09:26:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3818245863' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d66edcc6-663b-43db-9331-33ccbb320884"}]': finished
Nov 24 09:26:21 compute-0 ceph-mon[74331]: osdmap e5: 2 total, 0 up, 2 in
Nov 24 09:26:21 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:21 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1822629335' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 09:26:21 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 24 09:26:21 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 24 09:26:21 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c --setuser ceph --setgroup ceph
Nov 24 09:26:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Nov 24 09:26:21 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3279587715' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 09:26:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3279587715' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 09:26:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:26:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:23 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 24 09:26:23 compute-0 ceph-mon[74331]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:24 compute-0 quizzical_chatelet[80792]:  stderr: 2025-11-24T09:26:21.694+0000 7fdca344c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Nov 24 09:26:24 compute-0 quizzical_chatelet[80792]:  stderr: 2025-11-24T09:26:21.959+0000 7fdca344c740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 24 09:26:24 compute-0 quizzical_chatelet[80792]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 24 09:26:24 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 09:26:24 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 24 09:26:24 compute-0 ceph-mon[74331]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 24 09:26:24 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:24 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:24 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 09:26:24 compute-0 quizzical_chatelet[80792]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 09:26:24 compute-0 quizzical_chatelet[80792]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 24 09:26:24 compute-0 quizzical_chatelet[80792]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 24 09:26:24 compute-0 systemd[1]: libpod-769fd245fb69061b11b76070201740e4a1a491aa9ec1bb67dd25727b05c361b8.scope: Deactivated successfully.
Nov 24 09:26:24 compute-0 systemd[1]: libpod-769fd245fb69061b11b76070201740e4a1a491aa9ec1bb67dd25727b05c361b8.scope: Consumed 2.071s CPU time.
Nov 24 09:26:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:24 compute-0 podman[81768]: 2025-11-24 09:26:24.989365059 +0000 UTC m=+0.045370853 container died 769fd245fb69061b11b76070201740e4a1a491aa9ec1bb67dd25727b05c361b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chatelet, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 09:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e370d8e0d519df3a9663f8d537920779cd2c88fb229374c914fa1eb5a6f22329-merged.mount: Deactivated successfully.
Nov 24 09:26:25 compute-0 podman[81768]: 2025-11-24 09:26:25.046429372 +0000 UTC m=+0.102435126 container remove 769fd245fb69061b11b76070201740e4a1a491aa9ec1bb67dd25727b05c361b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 09:26:25 compute-0 systemd[1]: libpod-conmon-769fd245fb69061b11b76070201740e4a1a491aa9ec1bb67dd25727b05c361b8.scope: Deactivated successfully.
Nov 24 09:26:25 compute-0 sudo[80671]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:25 compute-0 sudo[81783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:25 compute-0 sudo[81783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:25 compute-0 sudo[81783]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:25 compute-0 sudo[81808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:26:25 compute-0 sudo[81808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
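[This second ceph-volume call ("lvm list --format json") is how cephadm refreshes its view of which LVs now carry OSDs. A sketch of inspecting the same mapping by hand, assuming jq; the top-level keys are OSD ids, per ceph-volume's documented JSON format:

    cephadm ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- \
        lvm list --format json | jq 'keys'   # expected here: ["0"]
    # each entry lists the backing LV plus tags such as ceph.osd_id and
    # ceph.osd_fsid
]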
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:26:25
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [balancer INFO root] No pools available
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:26:25 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:26:25 compute-0 ceph-mon[74331]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:25 compute-0 podman[81871]: 2025-11-24 09:26:25.739513989 +0000 UTC m=+0.057626198 container create dd7a256e23a5f7e9382f4ef27217a09ea01bb0ca66b8460dddb7033419c42db7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:26:25 compute-0 systemd[1]: Started libpod-conmon-dd7a256e23a5f7e9382f4ef27217a09ea01bb0ca66b8460dddb7033419c42db7.scope.
Nov 24 09:26:25 compute-0 podman[81871]: 2025-11-24 09:26:25.711209878 +0000 UTC m=+0.029322097 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:25 compute-0 podman[81871]: 2025-11-24 09:26:25.843986265 +0000 UTC m=+0.162098554 container init dd7a256e23a5f7e9382f4ef27217a09ea01bb0ca66b8460dddb7033419c42db7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Nov 24 09:26:25 compute-0 podman[81871]: 2025-11-24 09:26:25.85918736 +0000 UTC m=+0.177299599 container start dd7a256e23a5f7e9382f4ef27217a09ea01bb0ca66b8460dddb7033419c42db7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 09:26:25 compute-0 podman[81871]: 2025-11-24 09:26:25.864127453 +0000 UTC m=+0.182239752 container attach dd7a256e23a5f7e9382f4ef27217a09ea01bb0ca66b8460dddb7033419c42db7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_curie, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:26:25 compute-0 quizzical_curie[81887]: 167 167
Nov 24 09:26:25 compute-0 systemd[1]: libpod-dd7a256e23a5f7e9382f4ef27217a09ea01bb0ca66b8460dddb7033419c42db7.scope: Deactivated successfully.
Nov 24 09:26:25 compute-0 podman[81871]: 2025-11-24 09:26:25.870773248 +0000 UTC m=+0.188885467 container died dd7a256e23a5f7e9382f4ef27217a09ea01bb0ca66b8460dddb7033419c42db7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3110ecbf8cef8fd870e54ba13b7ccfaa1bf93379bc2109436d43f65b44a3db17-merged.mount: Deactivated successfully.
Nov 24 09:26:25 compute-0 podman[81871]: 2025-11-24 09:26:25.92461757 +0000 UTC m=+0.242729759 container remove dd7a256e23a5f7e9382f4ef27217a09ea01bb0ca66b8460dddb7033419c42db7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:26:25 compute-0 systemd[1]: libpod-conmon-dd7a256e23a5f7e9382f4ef27217a09ea01bb0ca66b8460dddb7033419c42db7.scope: Deactivated successfully.
Nov 24 09:26:26 compute-0 podman[81912]: 2025-11-24 09:26:26.13425124 +0000 UTC m=+0.051804653 container create 2b2a1370a0ce297e798c543f34a617e4c207eac16f7449de31f7ae7e10168b4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 09:26:26 compute-0 systemd[1]: Started libpod-conmon-2b2a1370a0ce297e798c543f34a617e4c207eac16f7449de31f7ae7e10168b4b.scope.
Nov 24 09:26:26 compute-0 podman[81912]: 2025-11-24 09:26:26.106360089 +0000 UTC m=+0.023913482 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8bf87a245ab3859b6ced08e9dcba747703ca3179260d1b053593c29624a79d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8bf87a245ab3859b6ced08e9dcba747703ca3179260d1b053593c29624a79d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8bf87a245ab3859b6ced08e9dcba747703ca3179260d1b053593c29624a79d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8bf87a245ab3859b6ced08e9dcba747703ca3179260d1b053593c29624a79d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:26 compute-0 podman[81912]: 2025-11-24 09:26:26.247264748 +0000 UTC m=+0.164818161 container init 2b2a1370a0ce297e798c543f34a617e4c207eac16f7449de31f7ae7e10168b4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:26:26 compute-0 podman[81912]: 2025-11-24 09:26:26.259032569 +0000 UTC m=+0.176585952 container start 2b2a1370a0ce297e798c543f34a617e4c207eac16f7449de31f7ae7e10168b4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 09:26:26 compute-0 podman[81912]: 2025-11-24 09:26:26.263272814 +0000 UTC m=+0.180826207 container attach 2b2a1370a0ce297e798c543f34a617e4c207eac16f7449de31f7ae7e10168b4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 09:26:26 compute-0 cranky_keller[81929]: {
Nov 24 09:26:26 compute-0 cranky_keller[81929]:     "0": [
Nov 24 09:26:26 compute-0 cranky_keller[81929]:         {
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             "devices": [
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "/dev/loop3"
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             ],
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             "lv_name": "ceph_lv0",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             "lv_size": "21470642176",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             "name": "ceph_lv0",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             "tags": {
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.cluster_name": "ceph",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.crush_device_class": "",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.encrypted": "0",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.osd_id": "0",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.type": "block",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.vdo": "0",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:                 "ceph.with_tpm": "0"
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             },
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             "type": "block",
Nov 24 09:26:26 compute-0 cranky_keller[81929]:             "vg_name": "ceph_vg0"
Nov 24 09:26:26 compute-0 cranky_keller[81929]:         }
Nov 24 09:26:26 compute-0 cranky_keller[81929]:     ]
Nov 24 09:26:26 compute-0 cranky_keller[81929]: }
Nov 24 09:26:26 compute-0 systemd[1]: libpod-2b2a1370a0ce297e798c543f34a617e4c207eac16f7449de31f7ae7e10168b4b.scope: Deactivated successfully.
Nov 24 09:26:26 compute-0 podman[81912]: 2025-11-24 09:26:26.599771103 +0000 UTC m=+0.517324556 container died 2b2a1370a0ce297e798c543f34a617e4c207eac16f7449de31f7ae7e10168b4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 09:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e8bf87a245ab3859b6ced08e9dcba747703ca3179260d1b053593c29624a79d-merged.mount: Deactivated successfully.
Nov 24 09:26:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Nov 24 09:26:26 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 09:26:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:26:26 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:26 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Nov 24 09:26:26 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Nov 24 09:26:26 compute-0 podman[81912]: 2025-11-24 09:26:26.651264918 +0000 UTC m=+0.568818301 container remove 2b2a1370a0ce297e798c543f34a617e4c207eac16f7449de31f7ae7e10168b4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_keller, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:26:26 compute-0 systemd[1]: libpod-conmon-2b2a1370a0ce297e798c543f34a617e4c207eac16f7449de31f7ae7e10168b4b.scope: Deactivated successfully.
Nov 24 09:26:26 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 09:26:26 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:26 compute-0 sudo[81808]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Nov 24 09:26:26 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 09:26:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:26:26 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:26 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 24 09:26:26 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 24 09:26:26 compute-0 sudo[81950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:26 compute-0 sudo[81950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:26 compute-0 sudo[81950]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:26 compute-0 sudo[81975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:26:26 compute-0 sudo[81975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:27 compute-0 podman[82042]: 2025-11-24 09:26:27.279086119 +0000 UTC m=+0.045697202 container create 2309aa287c387c189bce4f454ed1a843daeb3f8d97288b01685bb3a6abbb20d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_grothendieck, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:26:27 compute-0 systemd[1]: Started libpod-conmon-2309aa287c387c189bce4f454ed1a843daeb3f8d97288b01685bb3a6abbb20d9.scope.
Nov 24 09:26:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:27 compute-0 podman[82042]: 2025-11-24 09:26:27.258597352 +0000 UTC m=+0.025208445 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:27 compute-0 podman[82042]: 2025-11-24 09:26:27.364900583 +0000 UTC m=+0.131511686 container init 2309aa287c387c189bce4f454ed1a843daeb3f8d97288b01685bb3a6abbb20d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:26:27 compute-0 podman[82042]: 2025-11-24 09:26:27.377648558 +0000 UTC m=+0.144259641 container start 2309aa287c387c189bce4f454ed1a843daeb3f8d97288b01685bb3a6abbb20d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:26:27 compute-0 podman[82042]: 2025-11-24 09:26:27.381117724 +0000 UTC m=+0.147728837 container attach 2309aa287c387c189bce4f454ed1a843daeb3f8d97288b01685bb3a6abbb20d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_grothendieck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:27 compute-0 focused_grothendieck[82058]: 167 167
Nov 24 09:26:27 compute-0 systemd[1]: libpod-2309aa287c387c189bce4f454ed1a843daeb3f8d97288b01685bb3a6abbb20d9.scope: Deactivated successfully.
Nov 24 09:26:27 compute-0 podman[82042]: 2025-11-24 09:26:27.383323309 +0000 UTC m=+0.149934392 container died 2309aa287c387c189bce4f454ed1a843daeb3f8d97288b01685bb3a6abbb20d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:26:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-505e5e5636869ef0510e9d06c47de79cd92435f68b30507d360b3060042ae0a4-merged.mount: Deactivated successfully.
Nov 24 09:26:27 compute-0 podman[82042]: 2025-11-24 09:26:27.426673432 +0000 UTC m=+0.193284545 container remove 2309aa287c387c189bce4f454ed1a843daeb3f8d97288b01685bb3a6abbb20d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:26:27 compute-0 systemd[1]: libpod-conmon-2309aa287c387c189bce4f454ed1a843daeb3f8d97288b01685bb3a6abbb20d9.scope: Deactivated successfully.
Nov 24 09:26:27 compute-0 ceph-mon[74331]: Deploying daemon osd.1 on compute-1
Nov 24 09:26:27 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 09:26:27 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:27 compute-0 ceph-mon[74331]: Deploying daemon osd.0 on compute-0
Nov 24 09:26:27 compute-0 ceph-mon[74331]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:26:27 compute-0 podman[82088]: 2025-11-24 09:26:27.717652705 +0000 UTC m=+0.049290731 container create 7c48ee046280949493c545813edae47de6d28f9da9d5f892f7a31164da9f2031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate-test, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:27 compute-0 systemd[1]: Started libpod-conmon-7c48ee046280949493c545813edae47de6d28f9da9d5f892f7a31164da9f2031.scope.
Nov 24 09:26:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda2090bd2338f0db8937f28e0f967fb6546436866cdac6960e56edc6ba4f04c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda2090bd2338f0db8937f28e0f967fb6546436866cdac6960e56edc6ba4f04c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda2090bd2338f0db8937f28e0f967fb6546436866cdac6960e56edc6ba4f04c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda2090bd2338f0db8937f28e0f967fb6546436866cdac6960e56edc6ba4f04c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fda2090bd2338f0db8937f28e0f967fb6546436866cdac6960e56edc6ba4f04c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:27 compute-0 podman[82088]: 2025-11-24 09:26:27.694093172 +0000 UTC m=+0.025731178 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:27 compute-0 podman[82088]: 2025-11-24 09:26:27.79707137 +0000 UTC m=+0.128709366 container init 7c48ee046280949493c545813edae47de6d28f9da9d5f892f7a31164da9f2031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:26:27 compute-0 podman[82088]: 2025-11-24 09:26:27.802914776 +0000 UTC m=+0.134552762 container start 7c48ee046280949493c545813edae47de6d28f9da9d5f892f7a31164da9f2031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate-test, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 09:26:27 compute-0 podman[82088]: 2025-11-24 09:26:27.806837162 +0000 UTC m=+0.138475148 container attach 7c48ee046280949493c545813edae47de6d28f9da9d5f892f7a31164da9f2031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate-test, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:26:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate-test[82105]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Nov 24 09:26:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate-test[82105]:                             [--no-systemd] [--no-tmpfs]
Nov 24 09:26:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate-test[82105]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 24 09:26:27 compute-0 systemd[1]: libpod-7c48ee046280949493c545813edae47de6d28f9da9d5f892f7a31164da9f2031.scope: Deactivated successfully.
Nov 24 09:26:27 compute-0 podman[82088]: 2025-11-24 09:26:27.990675833 +0000 UTC m=+0.322313899 container died 7c48ee046280949493c545813edae47de6d28f9da9d5f892f7a31164da9f2031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate-test, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:26:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-fda2090bd2338f0db8937f28e0f967fb6546436866cdac6960e56edc6ba4f04c-merged.mount: Deactivated successfully.
Nov 24 09:26:28 compute-0 podman[82088]: 2025-11-24 09:26:28.035775419 +0000 UTC m=+0.367413395 container remove 7c48ee046280949493c545813edae47de6d28f9da9d5f892f7a31164da9f2031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:26:28 compute-0 systemd[1]: libpod-conmon-7c48ee046280949493c545813edae47de6d28f9da9d5f892f7a31164da9f2031.scope: Deactivated successfully.
Nov 24 09:26:28 compute-0 systemd[1]: Reloading.
Nov 24 09:26:28 compute-0 systemd-rc-local-generator[82168]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:26:28 compute-0 systemd-sysv-generator[82172]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:26:28 compute-0 systemd[1]: Reloading.
Nov 24 09:26:28 compute-0 systemd-rc-local-generator[82207]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:26:28 compute-0 systemd-sysv-generator[82210]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:26:28 compute-0 systemd[1]: Starting Ceph osd.0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:26:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:29 compute-0 podman[82264]: 2025-11-24 09:26:29.108972314 +0000 UTC m=+0.053061703 container create d5971ea401249e6c038b15c00b7e8b9481864e35da52be511186be127dd993d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 24 09:26:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2484cb2754700df92c827df3674cfd90b7fda207f43527202c7f5d83d09d0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:29 compute-0 podman[82264]: 2025-11-24 09:26:29.082222122 +0000 UTC m=+0.026311581 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2484cb2754700df92c827df3674cfd90b7fda207f43527202c7f5d83d09d0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2484cb2754700df92c827df3674cfd90b7fda207f43527202c7f5d83d09d0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2484cb2754700df92c827df3674cfd90b7fda207f43527202c7f5d83d09d0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2484cb2754700df92c827df3674cfd90b7fda207f43527202c7f5d83d09d0e/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:29 compute-0 podman[82264]: 2025-11-24 09:26:29.188608956 +0000 UTC m=+0.132698365 container init d5971ea401249e6c038b15c00b7e8b9481864e35da52be511186be127dd993d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:29 compute-0 podman[82264]: 2025-11-24 09:26:29.195943018 +0000 UTC m=+0.140032407 container start d5971ea401249e6c038b15c00b7e8b9481864e35da52be511186be127dd993d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 09:26:29 compute-0 podman[82264]: 2025-11-24 09:26:29.199183648 +0000 UTC m=+0.143273037 container attach d5971ea401249e6c038b15c00b7e8b9481864e35da52be511186be127dd993d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate[82279]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:29 compute-0 bash[82264]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate[82279]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:29 compute-0 bash[82264]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:30 compute-0 lvm[82360]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:26:30 compute-0 lvm[82360]: VG ceph_vg0 finished
Nov 24 09:26:30 compute-0 ceph-mon[74331]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate[82279]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 24 09:26:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate[82279]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:30 compute-0 bash[82264]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 24 09:26:30 compute-0 bash[82264]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate[82279]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:30 compute-0 bash[82264]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 09:26:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate[82279]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 09:26:30 compute-0 bash[82264]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 09:26:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate[82279]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 24 09:26:30 compute-0 bash[82264]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 24 09:26:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate[82279]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:30 compute-0 bash[82264]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate[82279]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:30 compute-0 bash[82264]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate[82279]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 09:26:30 compute-0 bash[82264]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 09:26:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate[82279]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 09:26:30 compute-0 bash[82264]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 09:26:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate[82279]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 24 09:26:30 compute-0 bash[82264]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 24 09:26:30 compute-0 systemd[1]: libpod-d5971ea401249e6c038b15c00b7e8b9481864e35da52be511186be127dd993d2.scope: Deactivated successfully.
Nov 24 09:26:30 compute-0 systemd[1]: libpod-d5971ea401249e6c038b15c00b7e8b9481864e35da52be511186be127dd993d2.scope: Consumed 1.590s CPU time.
Nov 24 09:26:30 compute-0 podman[82264]: 2025-11-24 09:26:30.552554092 +0000 UTC m=+1.496643481 container died d5971ea401249e6c038b15c00b7e8b9481864e35da52be511186be127dd993d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:26:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-df2484cb2754700df92c827df3674cfd90b7fda207f43527202c7f5d83d09d0e-merged.mount: Deactivated successfully.
Nov 24 09:26:30 compute-0 podman[82264]: 2025-11-24 09:26:30.600852888 +0000 UTC m=+1.544942277 container remove d5971ea401249e6c038b15c00b7e8b9481864e35da52be511186be127dd993d2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:26:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:26:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:30 compute-0 podman[82529]: 2025-11-24 09:26:30.821225173 +0000 UTC m=+0.037610858 container create 1545a78bd796a77eb9f2f55dccab95a3678ea5f1102b9835e501663eb82e422a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a847531e7a3ac5e0024a4552388fb4b534c56a637ea745067aac8aa6135754d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a847531e7a3ac5e0024a4552388fb4b534c56a637ea745067aac8aa6135754d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a847531e7a3ac5e0024a4552388fb4b534c56a637ea745067aac8aa6135754d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a847531e7a3ac5e0024a4552388fb4b534c56a637ea745067aac8aa6135754d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a847531e7a3ac5e0024a4552388fb4b534c56a637ea745067aac8aa6135754d5/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:30 compute-0 podman[82529]: 2025-11-24 09:26:30.88261619 +0000 UTC m=+0.099001895 container init 1545a78bd796a77eb9f2f55dccab95a3678ea5f1102b9835e501663eb82e422a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:26:30 compute-0 podman[82529]: 2025-11-24 09:26:30.889384499 +0000 UTC m=+0.105770184 container start 1545a78bd796a77eb9f2f55dccab95a3678ea5f1102b9835e501663eb82e422a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:30 compute-0 bash[82529]: 1545a78bd796a77eb9f2f55dccab95a3678ea5f1102b9835e501663eb82e422a
Nov 24 09:26:30 compute-0 podman[82529]: 2025-11-24 09:26:30.805696875 +0000 UTC m=+0.022082590 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:30 compute-0 systemd[1]: Started Ceph osd.0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:26:30 compute-0 ceph-osd[82549]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:26:30 compute-0 ceph-osd[82549]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Nov 24 09:26:30 compute-0 ceph-osd[82549]: pidfile_write: ignore empty --pid-file
Nov 24 09:26:30 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:30 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:30 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:30 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:30 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:30 compute-0 sudo[81975]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:26:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:26:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:31 compute-0 sudo[82561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:31 compute-0 sudo[82561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:31 compute-0 sudo[82561]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:31 compute-0 sudo[82586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:26:31 compute-0 sudo[82586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:31 compute-0 podman[82658]: 2025-11-24 09:26:31.563890716 +0000 UTC m=+0.052156084 container create 0352f5aeb43b2d1d57c586fa6d123d0d8bf7958e23b950ea159e0b73d62e8f19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_napier, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 09:26:31 compute-0 systemd[1]: Started libpod-conmon-0352f5aeb43b2d1d57c586fa6d123d0d8bf7958e23b950ea159e0b73d62e8f19.scope.
Nov 24 09:26:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:31 compute-0 podman[82658]: 2025-11-24 09:26:31.537599177 +0000 UTC m=+0.025864545 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:31 compute-0 ceph-mon[74331]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:31 compute-0 podman[82658]: 2025-11-24 09:26:31.653555962 +0000 UTC m=+0.141821360 container init 0352f5aeb43b2d1d57c586fa6d123d0d8bf7958e23b950ea159e0b73d62e8f19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_napier, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:26:31 compute-0 podman[82658]: 2025-11-24 09:26:31.663839618 +0000 UTC m=+0.152104966 container start 0352f5aeb43b2d1d57c586fa6d123d0d8bf7958e23b950ea159e0b73d62e8f19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 09:26:31 compute-0 podman[82658]: 2025-11-24 09:26:31.667635375 +0000 UTC m=+0.155900773 container attach 0352f5aeb43b2d1d57c586fa6d123d0d8bf7958e23b950ea159e0b73d62e8f19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_napier, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:26:31 compute-0 intelligent_napier[82679]: 167 167
Nov 24 09:26:31 compute-0 systemd[1]: libpod-0352f5aeb43b2d1d57c586fa6d123d0d8bf7958e23b950ea159e0b73d62e8f19.scope: Deactivated successfully.
Nov 24 09:26:31 compute-0 podman[82658]: 2025-11-24 09:26:31.671889186 +0000 UTC m=+0.160154534 container died 0352f5aeb43b2d1d57c586fa6d123d0d8bf7958e23b950ea159e0b73d62e8f19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_napier, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 09:26:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2800c3b3e0e69fd069bd48b85a399a828134c3a9bcb3f5f7e733ec99944232f-merged.mount: Deactivated successfully.
Nov 24 09:26:31 compute-0 podman[82658]: 2025-11-24 09:26:31.710598916 +0000 UTC m=+0.198864274 container remove 0352f5aeb43b2d1d57c586fa6d123d0d8bf7958e23b950ea159e0b73d62e8f19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_napier, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 09:26:31 compute-0 systemd[1]: libpod-conmon-0352f5aeb43b2d1d57c586fa6d123d0d8bf7958e23b950ea159e0b73d62e8f19.scope: Deactivated successfully.
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3dc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3dc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3dc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3dc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 24 09:26:31 compute-0 ceph-osd[82549]: bdev(0x558d1fb3dc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:31 compute-0 podman[82704]: 2025-11-24 09:26:31.913209985 +0000 UTC m=+0.049857733 container create 7c7bcaadd35280366c98c7ba046554a4c221c71e31f03e95076d39dfe394c62f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mestorf, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:26:31 compute-0 systemd[1]: Started libpod-conmon-7c7bcaadd35280366c98c7ba046554a4c221c71e31f03e95076d39dfe394c62f.scope.
Nov 24 09:26:31 compute-0 sudo[82743]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whbpdqcwqoyjrnsdbjpsnecvqstgawys ; /usr/bin/python3'
Nov 24 09:26:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:31 compute-0 podman[82704]: 2025-11-24 09:26:31.891807287 +0000 UTC m=+0.028455045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:31 compute-0 sudo[82743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed922d975952e97737252d5a00891cc6529ecbfc12141002dba6a084b3600cf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed922d975952e97737252d5a00891cc6529ecbfc12141002dba6a084b3600cf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed922d975952e97737252d5a00891cc6529ecbfc12141002dba6a084b3600cf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed922d975952e97737252d5a00891cc6529ecbfc12141002dba6a084b3600cf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:31 compute-0 podman[82704]: 2025-11-24 09:26:31.999288812 +0000 UTC m=+0.135936570 container init 7c7bcaadd35280366c98c7ba046554a4c221c71e31f03e95076d39dfe394c62f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mestorf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 09:26:32 compute-0 podman[82704]: 2025-11-24 09:26:32.007945328 +0000 UTC m=+0.144593066 container start 7c7bcaadd35280366c98c7ba046554a4c221c71e31f03e95076d39dfe394c62f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:26:32 compute-0 podman[82704]: 2025-11-24 09:26:32.011439485 +0000 UTC m=+0.148087223 container attach 7c7bcaadd35280366c98c7ba046554a4c221c71e31f03e95076d39dfe394c62f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mestorf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d1fb3d800 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:32 compute-0 python3[82748]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:26:32 compute-0 podman[82753]: 2025-11-24 09:26:32.186071294 +0000 UTC m=+0.041591440 container create 373da523bdf0d2c433e98dae40613723347d181d168711f0ada4ab7fb36dacf4 (image=quay.io/ceph/ceph:v19, name=suspicious_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:32 compute-0 systemd[1]: Started libpod-conmon-373da523bdf0d2c433e98dae40613723347d181d168711f0ada4ab7fb36dacf4.scope.
Nov 24 09:26:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402812556aa51c93e39ce8ce28e1e6be30517cbc61d7f88dfe1c3bb285e70cbc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402812556aa51c93e39ce8ce28e1e6be30517cbc61d7f88dfe1c3bb285e70cbc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402812556aa51c93e39ce8ce28e1e6be30517cbc61d7f88dfe1c3bb285e70cbc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:32 compute-0 podman[82753]: 2025-11-24 09:26:32.168848915 +0000 UTC m=+0.024369081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:26:32 compute-0 podman[82753]: 2025-11-24 09:26:32.264549196 +0000 UTC m=+0.120069372 container init 373da523bdf0d2c433e98dae40613723347d181d168711f0ada4ab7fb36dacf4 (image=quay.io/ceph/ceph:v19, name=suspicious_shaw, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 24 09:26:32 compute-0 podman[82753]: 2025-11-24 09:26:32.272831101 +0000 UTC m=+0.128351247 container start 373da523bdf0d2c433e98dae40613723347d181d168711f0ada4ab7fb36dacf4 (image=quay.io/ceph/ceph:v19, name=suspicious_shaw, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:26:32 compute-0 podman[82753]: 2025-11-24 09:26:32.276590197 +0000 UTC m=+0.132110363 container attach 373da523bdf0d2c433e98dae40613723347d181d168711f0ada4ab7fb36dacf4 (image=quay.io/ceph/ceph:v19, name=suspicious_shaw, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:26:32 compute-0 ceph-osd[82549]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 24 09:26:32 compute-0 ceph-osd[82549]: load: jerasure load: lrc 
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:26:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:26:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:32 compute-0 lvm[82865]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:26:32 compute-0 lvm[82865]: VG ceph_vg0 finished
Nov 24 09:26:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 24 09:26:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864336643' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 09:26:32 compute-0 suspicious_shaw[82778]: 
Nov 24 09:26:32 compute-0 suspicious_shaw[82778]: {"fsid":"84a084c3-61a7-5de7-8207-1f88efa59a64","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":84,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1763976381,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-11-24T09:25:05:540478+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-24T09:26:26.946461+0000","services":{}},"progress_events":{}}
Nov 24 09:26:32 compute-0 modest_mestorf[82744]: {}
Nov 24 09:26:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:26:32 compute-0 systemd[1]: libpod-373da523bdf0d2c433e98dae40613723347d181d168711f0ada4ab7fb36dacf4.scope: Deactivated successfully.
Nov 24 09:26:32 compute-0 conmon[82778]: conmon 373da523bdf0d2c433e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-373da523bdf0d2c433e98dae40613723347d181d168711f0ada4ab7fb36dacf4.scope/container/memory.events
Nov 24 09:26:32 compute-0 podman[82753]: 2025-11-24 09:26:32.719154713 +0000 UTC m=+0.574674899 container died 373da523bdf0d2c433e98dae40613723347d181d168711f0ada4ab7fb36dacf4 (image=quay.io/ceph/ceph:v19, name=suspicious_shaw, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:26:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-402812556aa51c93e39ce8ce28e1e6be30517cbc61d7f88dfe1c3bb285e70cbc-merged.mount: Deactivated successfully.
Nov 24 09:26:32 compute-0 systemd[1]: libpod-7c7bcaadd35280366c98c7ba046554a4c221c71e31f03e95076d39dfe394c62f.scope: Deactivated successfully.
Nov 24 09:26:32 compute-0 podman[82704]: 2025-11-24 09:26:32.764536308 +0000 UTC m=+0.901184076 container died 7c7bcaadd35280366c98c7ba046554a4c221c71e31f03e95076d39dfe394c62f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mestorf, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:26:32 compute-0 systemd[1]: libpod-7c7bcaadd35280366c98c7ba046554a4c221c71e31f03e95076d39dfe394c62f.scope: Consumed 1.184s CPU time.
Nov 24 09:26:32 compute-0 podman[82753]: 2025-11-24 09:26:32.788625288 +0000 UTC m=+0.644145434 container remove 373da523bdf0d2c433e98dae40613723347d181d168711f0ada4ab7fb36dacf4 (image=quay.io/ceph/ceph:v19, name=suspicious_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:32 compute-0 systemd[1]: libpod-conmon-373da523bdf0d2c433e98dae40613723347d181d168711f0ada4ab7fb36dacf4.scope: Deactivated successfully.
Nov 24 09:26:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed922d975952e97737252d5a00891cc6529ecbfc12141002dba6a084b3600cf1-merged.mount: Deactivated successfully.
Nov 24 09:26:32 compute-0 sudo[82743]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:32 compute-0 podman[82704]: 2025-11-24 09:26:32.819289461 +0000 UTC m=+0.955937199 container remove 7c7bcaadd35280366c98c7ba046554a4c221c71e31f03e95076d39dfe394c62f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:26:32 compute-0 systemd[1]: libpod-conmon-7c7bcaadd35280366c98c7ba046554a4c221c71e31f03e95076d39dfe394c62f.scope: Deactivated successfully.
Nov 24 09:26:32 compute-0 sudo[82586]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:26:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:32 compute-0 ceph-osd[82549]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 24 09:26:32 compute-0 ceph-osd[82549]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 24 09:26:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:32 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:32 compute-0 sudo[82907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:26:32 compute-0 sudo[82907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:32 compute-0 sudo[82907]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:33 compute-0 sudo[82935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:33 compute-0 sudo[82935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:33 compute-0 sudo[82935]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:33 compute-0 sudo[82960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:26:33 compute-0 sudo[82960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:33 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:33 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2864336643' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 09:26:33 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:33 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:33 compute-0 ceph-mon[74331]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Nov 24 09:26:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 24 09:26:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:26:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d9000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d9000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d9000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d9000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount shared_bdev_used = 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: RocksDB version: 7.9.2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Git sha 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: DB SUMMARY
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: DB Session ID:  CN0NKLUDD11MECT3EWQE
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: CURRENT file:  CURRENT
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                         Options.error_if_exists: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.create_if_missing: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                                     Options.env: 0x558d209a9dc0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                                Options.info_log: 0x558d209ad7a0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                              Options.statistics: (nil)
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.use_fsync: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                              Options.db_log_dir: 
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.write_buffer_manager: 0x558d20aa2a00
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.unordered_write: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.row_cache: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                              Options.wal_filter: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.two_write_queues: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.wal_compression: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.atomic_flush: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.max_background_jobs: 4
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.max_background_compactions: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.max_subcompactions: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.max_open_files: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Compression algorithms supported:
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         kZSTD supported: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         kXpressCompression supported: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         kBZip2Compression supported: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         kLZ4Compression supported: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         kZlibCompression supported: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         kSnappyCompression supported: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d209adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
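Note on the level-sizing lines in the dump above: with level_compaction_dynamic_level_bytes off, max_bytes_for_level_base = 1073741824 (1 GiB) and max_bytes_for_level_multiplier = 8 imply target level capacities of roughly 1 GiB (L1), 8 GiB (L2), 64 GiB (L3) and 512 GiB (L4), while target_file_size_base = 67108864 (64 MiB) with a multiplier of 1 keeps individual SSTs near 64 MiB at every level. A minimal standalone check of that geometry (plain C++, no RocksDB dependency; illustration only):

#include <cstdint>
#include <cstdio>

int main() {
  // Reproduces the geometry printed above:
  // base 1073741824 bytes, multiplier 8.000000, addtl factors all 1.
  uint64_t level_target = 1ull << 30;  // L1 target = max_bytes_for_level_base
  for (int level = 1; level <= 4; ++level) {
    std::printf("L%d target: %llu bytes\n", level,
                static_cast<unsigned long long>(level_target));
    level_target *= 8;  // Options.max_bytes_for_level_multiplier: 8.000000
  }
  return 0;
}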
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d209adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
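The [m-0] block above repeats the same write-path tuning as the other families. As a minimal sketch, assuming the upstream RocksDB C++ API (Ceph links its own bundled copy, but these fields exist upstream under the same names), the memtable and compression values map onto ColumnFamilyOptions like this:

#include <rocksdb/options.h>

// Sketch only: each field mirrors an "Options.*" line in the dump above.
rocksdb::ColumnFamilyOptions MakeWritePathCf() {
  rocksdb::ColumnFamilyOptions cf;
  cf.write_buffer_size = 16 * 1024 * 1024;    // Options.write_buffer_size: 16777216
  cf.max_write_buffer_number = 64;            // Options.max_write_buffer_number: 64
  cf.min_write_buffer_number_to_merge = 6;    // flush merges 6 immutable memtables
  cf.compression = rocksdb::kLZ4Compression;  // Options.compression: LZ4
  cf.bottommost_compression =
      rocksdb::kDisableCompressionOption;     // Options.bottommost_compression: Disabled
  return cf;
}

In effect a flush is issued only after roughly 6 x 16 MiB of memtable data accumulates per family, while up to 64 memtables (about 1 GiB) may queue before writes stall.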
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d209adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
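The table_factory sub-block repeated in each dump (block_size 4096, cache_index_and_filter_blocks 1, pin_top_level_index_and_filter 1, format_version 5, and a block cache of 483183820 bytes, about 461 MiB, split into 2^4 shards) corresponds to RocksDB's BlockBasedTableOptions; the identical block_cache pointer 0x558d1fbd3350 in every dump shows all column families share one cache. BinnedLRUCache is Ceph's own cache implementation, so the sketch below substitutes stock RocksDB's NewLRUCache, and the bloom bits-per-key value is an assumption since the dump only prints "filter_policy: bloomfilter":

#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

rocksdb::ColumnFamilyOptions MakeTableCf() {
  rocksdb::BlockBasedTableOptions t;
  t.block_size = 4096;                        // block_size: 4096
  t.cache_index_and_filter_blocks = true;     // cache_index_and_filter_blocks: 1
  t.pin_top_level_index_and_filter = true;    // pin_top_level_index_and_filter: 1
  t.format_version = 5;                       // format_version: 5
  t.filter_policy.reset(
      rocksdb::NewBloomFilterPolicy(10));     // bits/key assumed; not in the log
  // The log shows Ceph's BinnedLRUCache; NewLRUCache is the stock analogue.
  t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);

  rocksdb::ColumnFamilyOptions cf;
  cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
  return cf;
}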
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d209adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
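One non-default line that is easy to miss in these dumps is Options.table_properties_collectors: CompactOnDeletionCollector (sliding window 32768, deletion trigger 16384, deletion ratio 0). It marks any SST whose 32768-entry sliding window contains at least 16384 tombstones for compaction, which keeps delete-heavy OSD workloads from accumulating dead keys. Ceph wires this up internally; in stock RocksDB the equivalent configuration is:

#include <rocksdb/options.h>
#include <rocksdb/utilities/table_properties_collectors.h>

rocksdb::ColumnFamilyOptions MakeDeletionAwareCf() {
  rocksdb::ColumnFamilyOptions cf;
  // Mirrors the dump: window 32768, trigger 16384, ratio 0 (ratio check off).
  cf.table_properties_collector_factories.emplace_back(
      rocksdb::NewCompactOnDeletionCollectorFactory(
          /*sliding_window_size=*/32768,
          /*deletion_trigger=*/16384,
          /*deletion_ratio=*/0.0));
  return cf;
}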
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d209adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
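The family names in these headers (m-0 through m-2, p-0, p-1, plus the implicit default) come from BlueStore's sharded-RocksDB layout, steered by Ceph's bluestore_rocksdb_cfs option; the exact prefix-to-shard mapping is version-dependent and not shown in this log. Any external tool that opens such a DB must enumerate and open every family explicitly. A sketch with a placeholder path:

#include <rocksdb/db.h>
#include <string>
#include <vector>

int main() {
  const std::string path = "/path/to/osd/db";  // placeholder, not from this log
  std::vector<std::string> names;
  rocksdb::Status s =
      rocksdb::DB::ListColumnFamilies(rocksdb::DBOptions(), path, &names);
  if (!s.ok()) return 1;  // names would include "default", "m-0", ..., "p-1"

  std::vector<rocksdb::ColumnFamilyDescriptor> descs;
  for (const auto& n : names)
    descs.emplace_back(n, rocksdb::ColumnFamilyOptions());

  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::DB* db = nullptr;
  s = rocksdb::DB::Open(rocksdb::DBOptions(), path, descs, &handles, &db);
  if (!s.ok()) return 1;

  for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
  delete db;
  return 0;
}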
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d209adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d209adb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d209adb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd29b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d209adb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd29b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d209adb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd29b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 560f10b8-6a3b-47ae-afd2-b8804405c31a
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393754197, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976393754439, "job": 1, "event": "recovery_finished"}
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: freelist init
Nov 24 09:26:33 compute-0 ceph-osd[82549]: freelist _read_cfg
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs umount
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d9000 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 09:26:33 compute-0 podman[83227]: 2025-11-24 09:26:33.820792603 +0000 UTC m=+0.056020874 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 09:26:33 compute-0 podman[83227]: 2025-11-24 09:26:33.928465343 +0000 UTC m=+0.163693584 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d9000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d9000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d9000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bdev(0x558d209d9000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluefs mount shared_bdev_used = 4718592
Nov 24 09:26:33 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: RocksDB version: 7.9.2
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Git sha 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: DB SUMMARY
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: DB Session ID:  CN0NKLUDD11MECT3EWQF
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: CURRENT file:  CURRENT
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                         Options.error_if_exists: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.create_if_missing: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                                     Options.env: 0x558d20b462a0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                                Options.info_log: 0x558d209ad920
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                              Options.statistics: (nil)
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                               Options.use_fsync: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 09:26:33 compute-0 ceph-osd[82549]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                              Options.db_log_dir: 
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.write_buffer_manager: 0x558d20aa2c80
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.unordered_write: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                               Options.row_cache: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                              Options.wal_filter: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.two_write_queues: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.wal_compression: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.atomic_flush: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.max_background_jobs: 4
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.max_background_compactions: -1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.max_subcompactions: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.max_open_files: -1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Compression algorithms supported:
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         kZSTD supported: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         kXpressCompression supported: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         kBZip2Compression supported: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         kLZ4Compression supported: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         kZlibCompression supported: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         kLZ4HCCompression supported: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         kSnappyCompression supported: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d20b70600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d20b70600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d20b70600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d20b70600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d20b70600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d20b70600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d20b70600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd3350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
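A side note on the level sizing printed above: with level_compaction_dynamic_level_bytes disabled, max_bytes_for_level_base at 1073741824 (1 GiB), max_bytes_for_level_multiplier at 8 and every addtl multiplier at 1, RocksDB sizes static level targets as base * multiplier^(L-1). A minimal Python sketch of that arithmetic, using only values visible in this dump (num_levels: 7 is printed with each full per-CF dump below):

    base = 1073741824          # Options.max_bytes_for_level_base (1 GiB)
    mult = 8.0                 # Options.max_bytes_for_level_multiplier
    for level in range(1, 7):  # num_levels is 7, so data levels L1..L6
        print(f"L{level}: {base * mult ** (level - 1) / 2**30:.0f} GiB")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB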
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:           Options.merge_operator: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558d20b71340)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558d1fbd2f30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.compression: LZ4
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.num_levels: 7
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.bloom_locality: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                               Options.ttl: 2592000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                       Options.enable_blob_files: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                           Options.min_blob_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 560f10b8-6a3b-47ae-afd2-b8804405c31a
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976394012982, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976394023572, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976394, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "560f10b8-6a3b-47ae-afd2-b8804405c31a", "db_session_id": "CN0NKLUDD11MECT3EWQF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976394026128, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976394, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "560f10b8-6a3b-47ae-afd2-b8804405c31a", "db_session_id": "CN0NKLUDD11MECT3EWQF", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976394030540, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976394, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "560f10b8-6a3b-47ae-afd2-b8804405c31a", "db_session_id": "CN0NKLUDD11MECT3EWQF", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976394031919, "job": 1, "event": "recovery_finished"}
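The EVENT_LOG_v1 records above are single-line JSON payloads embedded in the journal messages, so they can be extracted mechanically. A minimal Python sketch (the function name and the exported journal path are mine, not from the log):

    import json

    def iter_rocksdb_events(lines):
        """Yield the JSON payload of each rocksdb EVENT_LOG_v1 journal line."""
        marker = "EVENT_LOG_v1 "
        for line in lines:
            i = line.find(marker)
            if i != -1:
                yield json.loads(line[i + len(marker):])

    # Hypothetical usage against a journal exported to a text file:
    with open("journal.txt") as f:
        for ev in iter_rocksdb_events(f):
            if ev.get("event") == "table_file_creation":
                print(ev["cf_name"], ev["file_number"], ev["file_size"])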
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558d20b74000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: DB pointer 0x558d20b54000
Nov 24 09:26:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
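The _open_db line above records the effective bluestore rocksdb options as a flat comma-separated key=value string. Since this particular string contains no nested braced values, a naive split recovers it; a small sketch (variable names are mine):

    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    print(parsed["write_buffer_size"])  # '16777216', matching the dump above
    # Caveat: RocksDB option strings may nest values in {...}; splitting on bare
    # commas only works when, as here, no braced values are present.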
Nov 24 09:26:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 24 09:26:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:26:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
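The Stalls(count) lines in each per-CF stats section tie back to the write-throttle thresholds in the options dump (level0_slowdown_writes_trigger: 20, level0_stop_writes_trigger: 36, and the soft/hard pending-compaction byte limits). A rough Python sketch for tallying one such line (regex and names are mine):

    import re

    STALLS = re.compile(r"Stalls\(count\): (.*)")

    def parse_stalls(line):
        """Map each stall reason to its count, e.g. {'level0_slowdown': 0, ...}."""
        m = STALLS.search(line)
        if not m:
            return {}
        counts = {}
        for part in m.group(1).split(", "):
            if part.startswith("interval"):
                continue  # trailing "interval N total count" summary
            n, reason = part.split(" ", 1)
            counts[reason] = int(n)
        return counts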
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 09:26:34 compute-0 ceph-osd[82549]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 24 09:26:34 compute-0 ceph-osd[82549]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 24 09:26:34 compute-0 ceph-osd[82549]: _get_class not permitted to load lua
Nov 24 09:26:34 compute-0 ceph-osd[82549]: _get_class not permitted to load sdk
Nov 24 09:26:34 compute-0 ceph-osd[82549]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 24 09:26:34 compute-0 ceph-osd[82549]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 24 09:26:34 compute-0 ceph-osd[82549]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 24 09:26:34 compute-0 ceph-osd[82549]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 24 09:26:34 compute-0 ceph-osd[82549]: osd.0 0 load_pgs
Nov 24 09:26:34 compute-0 ceph-osd[82549]: osd.0 0 load_pgs opened 0 pgs
Nov 24 09:26:34 compute-0 ceph-osd[82549]: osd.0 0 log_to_monitors true
Nov 24 09:26:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0[82545]: 2025-11-24T09:26:34.059+0000 7f15396a9740 -1 osd.0 0 log_to_monitors true
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:34 compute-0 sudo[82960]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:34 compute-0 sudo[83544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:26:34 compute-0 sudo[83544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:34 compute-0 sudo[83544]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:34 compute-0 sudo[83569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- inventory --format=json-pretty --filter-for-batch
Nov 24 09:26:34 compute-0 sudo[83569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 09:26:34 compute-0 ceph-mon[74331]: from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 24 09:26:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:34 compute-0 ceph-mon[74331]: from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 24 09:26:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:34 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 09:26:34 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Nov 24 09:26:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 24 09:26:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Nov 24 09:26:34 compute-0 podman[83633]: 2025-11-24 09:26:34.796305193 +0000 UTC m=+0.039895597 container create d0a89c742e918aacb2f41ebe3eda506fc49e76bffcdecb5907bd91e69931c3b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:26:34 compute-0 systemd[1]: Started libpod-conmon-d0a89c742e918aacb2f41ebe3eda506fc49e76bffcdecb5907bd91e69931c3b6.scope.
Nov 24 09:26:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:34 compute-0 podman[83633]: 2025-11-24 09:26:34.856401771 +0000 UTC m=+0.099992185 container init d0a89c742e918aacb2f41ebe3eda506fc49e76bffcdecb5907bd91e69931c3b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lederberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:34 compute-0 podman[83633]: 2025-11-24 09:26:34.863292532 +0000 UTC m=+0.106882936 container start d0a89c742e918aacb2f41ebe3eda506fc49e76bffcdecb5907bd91e69931c3b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lederberg, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:26:34 compute-0 podman[83633]: 2025-11-24 09:26:34.866322315 +0000 UTC m=+0.109912749 container attach d0a89c742e918aacb2f41ebe3eda506fc49e76bffcdecb5907bd91e69931c3b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lederberg, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:26:34 compute-0 quizzical_lederberg[83650]: 167 167
Nov 24 09:26:34 compute-0 systemd[1]: libpod-d0a89c742e918aacb2f41ebe3eda506fc49e76bffcdecb5907bd91e69931c3b6.scope: Deactivated successfully.
Nov 24 09:26:34 compute-0 conmon[83650]: conmon d0a89c742e918aacb2f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0a89c742e918aacb2f41ebe3eda506fc49e76bffcdecb5907bd91e69931c3b6.scope/container/memory.events
Nov 24 09:26:34 compute-0 podman[83633]: 2025-11-24 09:26:34.871380331 +0000 UTC m=+0.114970735 container died d0a89c742e918aacb2f41ebe3eda506fc49e76bffcdecb5907bd91e69931c3b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lederberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:26:34 compute-0 podman[83633]: 2025-11-24 09:26:34.779036062 +0000 UTC m=+0.022626466 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad9c017fb16c12873cff0d338add0575a09e18261ecb2117ba12e59b3d41cb59-merged.mount: Deactivated successfully.
Nov 24 09:26:34 compute-0 podman[83633]: 2025-11-24 09:26:34.909671458 +0000 UTC m=+0.153261862 container remove d0a89c742e918aacb2f41ebe3eda506fc49e76bffcdecb5907bd91e69931c3b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_lederberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:26:34 compute-0 systemd[1]: libpod-conmon-d0a89c742e918aacb2f41ebe3eda506fc49e76bffcdecb5907bd91e69931c3b6.scope: Deactivated successfully.
Nov 24 09:26:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:35 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 24 09:26:35 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 24 09:26:35 compute-0 podman[83672]: 2025-11-24 09:26:35.068952615 +0000 UTC m=+0.042672062 container create f74b25d4284a1c5e992cecf392df3de354fe6c53db08b5dc8142ded12fac4bc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:26:35 compute-0 systemd[1]: Started libpod-conmon-f74b25d4284a1c5e992cecf392df3de354fe6c53db08b5dc8142ded12fac4bc8.scope.
Nov 24 09:26:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:26:35 compute-0 podman[83672]: 2025-11-24 09:26:35.048765355 +0000 UTC m=+0.022484822 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a76ad506701f41775ebd3b4e81fab1d9c526d2a8baa33159d5f3f83df80b3fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a76ad506701f41775ebd3b4e81fab1d9c526d2a8baa33159d5f3f83df80b3fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a76ad506701f41775ebd3b4e81fab1d9c526d2a8baa33159d5f3f83df80b3fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a76ad506701f41775ebd3b4e81fab1d9c526d2a8baa33159d5f3f83df80b3fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:26:35 compute-0 podman[83672]: 2025-11-24 09:26:35.163867504 +0000 UTC m=+0.137586981 container init f74b25d4284a1c5e992cecf392df3de354fe6c53db08b5dc8142ded12fac4bc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 09:26:35 compute-0 podman[83672]: 2025-11-24 09:26:35.176235474 +0000 UTC m=+0.149954931 container start f74b25d4284a1c5e992cecf392df3de354fe6c53db08b5dc8142ded12fac4bc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Nov 24 09:26:35 compute-0 podman[83672]: 2025-11-24 09:26:35.180600488 +0000 UTC m=+0.154319965 container attach f74b25d4284a1c5e992cecf392df3de354fe6c53db08b5dc8142ded12fac4bc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_moore, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:26:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 24 09:26:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 09:26:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 09:26:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 24 09:26:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Nov 24 09:26:35 compute-0 ceph-osd[82549]: osd.0 0 done with init, starting boot process
Nov 24 09:26:35 compute-0 ceph-osd[82549]: osd.0 0 start_boot
Nov 24 09:26:35 compute-0 ceph-osd[82549]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 24 09:26:35 compute-0 ceph-osd[82549]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 24 09:26:35 compute-0 ceph-osd[82549]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 24 09:26:35 compute-0 ceph-osd[82549]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 24 09:26:35 compute-0 ceph-osd[82549]: osd.0 0  bench count 12288000 bsize 4 KiB
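The "bench count 12288000 bsize 4 KiB" line above is the startup benchmark the mClock scheduler runs to estimate the OSD's IOPS capacity: 12288000 bytes issued as 4 KiB writes. A quick sanity check on those figures (values taken from the log line; nothing assumed beyond bsize = 4096 bytes):

    # 12288000 bytes / 4096 bytes per write = 3000 writes issued by the bench
    echo $(( 12288000 / 4096 ))   # prints 3000

The result of this run shows up below at 09:26:38 as the "osd bench result" line.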
Nov 24 09:26:35 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Nov 24 09:26:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:26:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 09:26:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:35 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 09:26:35 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 09:26:35 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1187333864; not ready for session (expect reconnect)
Nov 24 09:26:35 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2493412744; not ready for session (expect reconnect)
Nov 24 09:26:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:26:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 09:26:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:35 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 09:26:35 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 09:26:35 compute-0 ceph-mon[74331]: from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 24 09:26:35 compute-0 ceph-mon[74331]: from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 24 09:26:35 compute-0 ceph-mon[74331]: osdmap e6: 2 total, 0 up, 2 in
Nov 24 09:26:35 compute-0 ceph-mon[74331]: from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 09:26:35 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:35 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:35 compute-0 ceph-mon[74331]: from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 24 09:26:35 compute-0 ceph-mon[74331]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:26:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:36 compute-0 focused_moore[83689]: [
Nov 24 09:26:36 compute-0 focused_moore[83689]:     {
Nov 24 09:26:36 compute-0 focused_moore[83689]:         "available": false,
Nov 24 09:26:36 compute-0 focused_moore[83689]:         "being_replaced": false,
Nov 24 09:26:36 compute-0 focused_moore[83689]:         "ceph_device_lvm": false,
Nov 24 09:26:36 compute-0 focused_moore[83689]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 09:26:36 compute-0 focused_moore[83689]:         "lsm_data": {},
Nov 24 09:26:36 compute-0 focused_moore[83689]:         "lvs": [],
Nov 24 09:26:36 compute-0 focused_moore[83689]:         "path": "/dev/sr0",
Nov 24 09:26:36 compute-0 focused_moore[83689]:         "rejected_reasons": [
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "Insufficient space (<5GB)",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "Has a FileSystem"
Nov 24 09:26:36 compute-0 focused_moore[83689]:         ],
Nov 24 09:26:36 compute-0 focused_moore[83689]:         "sys_api": {
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "actuators": null,
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "device_nodes": [
Nov 24 09:26:36 compute-0 focused_moore[83689]:                 "sr0"
Nov 24 09:26:36 compute-0 focused_moore[83689]:             ],
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "devname": "sr0",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "human_readable_size": "482.00 KB",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "id_bus": "ata",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "model": "QEMU DVD-ROM",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "nr_requests": "2",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "parent": "/dev/sr0",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "partitions": {},
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "path": "/dev/sr0",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "removable": "1",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "rev": "2.5+",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "ro": "0",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "rotational": "1",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "sas_address": "",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "sas_device_handle": "",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "scheduler_mode": "mq-deadline",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "sectors": 0,
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "sectorsize": "2048",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "size": 493568.0,
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "support_discard": "2048",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "type": "disk",
Nov 24 09:26:36 compute-0 focused_moore[83689]:             "vendor": "QEMU"
Nov 24 09:26:36 compute-0 focused_moore[83689]:         }
Nov 24 09:26:36 compute-0 focused_moore[83689]:     }
Nov 24 09:26:36 compute-0 focused_moore[83689]: ]
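The JSON array printed by the short-lived focused_moore container above is cephadm's per-host device inventory: the only device this scan reports, /dev/sr0 (the ~482 KB QEMU DVD-ROM), is rejected as an OSD candidate for "Insufficient space (<5GB)" and "Has a FileSystem". A hedged way to reproduce this report by hand, assuming cephadm is installed on the host:

    # runs ceph-volume in a container much like the one logged above and
    # prints the same style of inventory
    cephadm ceph-volume inventory --format json-pretty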
Nov 24 09:26:36 compute-0 systemd[1]: libpod-f74b25d4284a1c5e992cecf392df3de354fe6c53db08b5dc8142ded12fac4bc8.scope: Deactivated successfully.
Nov 24 09:26:36 compute-0 podman[83672]: 2025-11-24 09:26:36.042444244 +0000 UTC m=+1.016163721 container died f74b25d4284a1c5e992cecf392df3de354fe6c53db08b5dc8142ded12fac4bc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 09:26:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a76ad506701f41775ebd3b4e81fab1d9c526d2a8baa33159d5f3f83df80b3fd-merged.mount: Deactivated successfully.
Nov 24 09:26:36 compute-0 podman[83672]: 2025-11-24 09:26:36.180239211 +0000 UTC m=+1.153958658 container remove f74b25d4284a1c5e992cecf392df3de354fe6c53db08b5dc8142ded12fac4bc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:26:36 compute-0 systemd[1]: libpod-conmon-f74b25d4284a1c5e992cecf392df3de354fe6c53db08b5dc8142ded12fac4bc8.scope: Deactivated successfully.
Nov 24 09:26:36 compute-0 sudo[83569]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:26:36 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:26:36 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:26:36 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:26:36 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Nov 24 09:26:36 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:26:36 compute-0 ceph-mgr[74626]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Nov 24 09:26:36 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Nov 24 09:26:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 24 09:26:36 compute-0 ceph-mgr[74626]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 24 09:26:36 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
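The two lines above show cephadm's memory autotuner on an undersized VM: it computed 128.0M (134217728 bytes) as compute-0's share, but the mon rejects anything under the hard floor of 939524096 bytes, i.e. 939524096 / 1048576 = 896 MiB, so osd.0 keeps its previous osd_memory_target. A hedged remediation sketch for a memory-constrained lab host like this one (osd_memory_target_autotune is a standard cephadm tunable; disabling it here is an assumption, not something the log prescribes):

    # stop cephadm from recomputing per-host osd_memory_target values
    ceph config set osd osd_memory_target_autotune false
    # the hard minimum quoted in the warning, expressed in MiB
    echo $(( 939524096 / 1048576 ))   # prints 896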
Nov 24 09:26:36 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1187333864; not ready for session (expect reconnect)
Nov 24 09:26:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:26:36 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:36 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 09:26:36 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2493412744; not ready for session (expect reconnect)
Nov 24 09:26:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 09:26:36 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:36 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 24 09:26:36 compute-0 ceph-mon[74331]: osdmap e7: 2 total, 0 up, 2 in
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:37 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1187333864; not ready for session (expect reconnect)
Nov 24 09:26:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:26:37 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:37 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 09:26:37 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2493412744; not ready for session (expect reconnect)
Nov 24 09:26:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 09:26:37 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:37 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 09:26:37 compute-0 ceph-mon[74331]: purged_snaps scrub starts
Nov 24 09:26:37 compute-0 ceph-mon[74331]: purged_snaps scrub ok
Nov 24 09:26:37 compute-0 ceph-mon[74331]: purged_snaps scrub starts
Nov 24 09:26:37 compute-0 ceph-mon[74331]: purged_snaps scrub ok
Nov 24 09:26:37 compute-0 ceph-mon[74331]: Adjusting osd_memory_target on compute-0 to 128.0M
Nov 24 09:26:37 compute-0 ceph-mon[74331]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 24 09:26:37 compute-0 ceph-mon[74331]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:37 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:37 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:26:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:26:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:26:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:26:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Nov 24 09:26:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:26:37 compute-0 ceph-mgr[74626]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5248M
Nov 24 09:26:37 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5248M
Nov 24 09:26:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 24 09:26:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:26:38 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1187333864; not ready for session (expect reconnect)
Nov 24 09:26:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:26:38 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:38 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2493412744; not ready for session (expect reconnect)
Nov 24 09:26:38 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 09:26:38 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 09:26:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 09:26:38 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:38 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:38 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:38 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:38 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:38 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:26:38 compute-0 ceph-mon[74331]: Adjusting osd_memory_target on compute-1 to  5248M
Nov 24 09:26:38 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:38 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:38 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:38 compute-0 ceph-osd[82549]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.479 iops: 6266.692 elapsed_sec: 0.479
Nov 24 09:26:38 compute-0 ceph-osd[82549]: log_channel(cluster) log [WRN] : OSD bench result of 6266.692144 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
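This warning carries its own remediation: the startup bench measured 6266.692 IOPS, far outside the 50-500 IOPS plausibility window for an hdd-class device (a cached virtual disk inflates the number), so mClock keeps the default 315 IOPS. The figure is internally consistent with the bench parameters logged at 09:26:35: 3000 writes / 0.479 s ≈ 6263 IOPS. A hedged sketch of the override the message recommends; 6266 is only an illustrative value, substitute one measured with fio:

    # pin osd.0's assumed IOPS capacity for the mClock scheduler
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 6266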
Nov 24 09:26:38 compute-0 ceph-osd[82549]: osd.0 0 waiting for initial osdmap
Nov 24 09:26:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0[82545]: 2025-11-24T09:26:38.880+0000 7f153562c640 -1 osd.0 0 waiting for initial osdmap
Nov 24 09:26:38 compute-0 ceph-osd[82549]: osd.0 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 24 09:26:38 compute-0 ceph-osd[82549]: osd.0 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 24 09:26:38 compute-0 ceph-osd[82549]: osd.0 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 24 09:26:38 compute-0 ceph-osd[82549]: osd.0 7 check_osdmap_features require_osd_release unknown -> squid
Nov 24 09:26:38 compute-0 ceph-osd[82549]: osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 09:26:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-osd-0[82545]: 2025-11-24T09:26:38.909+0000 7f1530c54640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 09:26:38 compute-0 ceph-osd[82549]: osd.0 7 set_numa_affinity not setting numa affinity
Nov 24 09:26:38 compute-0 ceph-osd[82549]: osd.0 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Nov 24 09:26:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:39 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1187333864; not ready for session (expect reconnect)
Nov 24 09:26:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:26:39 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:39 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2493412744; not ready for session (expect reconnect)
Nov 24 09:26:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 09:26:39 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:39 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 09:26:39 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 09:26:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 24 09:26:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 09:26:39 compute-0 ceph-mon[74331]: OSD bench result of 6266.692144 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 09:26:39 compute-0 ceph-mon[74331]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 09:26:39 compute-0 ceph-mon[74331]: OSD bench result of 10987.994700 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 09:26:39 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:39 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e8 e8: 2 total, 2 up, 2 in
Nov 24 09:26:39 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864] boot
Nov 24 09:26:39 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744] boot
Nov 24 09:26:39 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 2 up, 2 in
Nov 24 09:26:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:26:39 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 09:26:39 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:39 compute-0 ceph-osd[82549]: osd.0 8 state: booting -> active
Nov 24 09:26:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 24 09:26:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 09:26:40 compute-0 ceph-mon[74331]: osd.0 [v2:192.168.122.100:6802/1187333864,v1:192.168.122.100:6803/1187333864] boot
Nov 24 09:26:40 compute-0 ceph-mon[74331]: osd.1 [v2:192.168.122.101:6800/2493412744,v1:192.168.122.101:6801/2493412744] boot
Nov 24 09:26:40 compute-0 ceph-mon[74331]: osdmap e8: 2 total, 2 up, 2 in
Nov 24 09:26:40 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:26:40 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:26:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Nov 24 09:26:40 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Nov 24 09:26:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 24 09:26:41 compute-0 ceph-mgr[74626]: [devicehealth INFO root] creating mgr pool
Nov 24 09:26:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Nov 24 09:26:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 24 09:26:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 24 09:26:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 09:26:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 24 09:26:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Nov 24 09:26:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Nov 24 09:26:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 09:26:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 09:26:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 09:26:41 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Nov 24 09:26:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Nov 24 09:26:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 24 09:26:41 compute-0 ceph-mon[74331]: osdmap e9: 2 total, 2 up, 2 in
Nov 24 09:26:41 compute-0 ceph-mon[74331]: pgmap v39: 0 pgs: ; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 24 09:26:41 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 24 09:26:42 compute-0 ceph-osd[82549]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 24 09:26:42 compute-0 ceph-osd[82549]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 24 09:26:42 compute-0 ceph-osd[82549]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 24 09:26:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 24 09:26:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 24 09:26:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Nov 24 09:26:42 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Nov 24 09:26:42 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 24 09:26:42 compute-0 ceph-mon[74331]: osdmap e10: 2 total, 2 up, 2 in
Nov 24 09:26:42 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 24 09:26:42 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
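The sequence starting at 09:26:41 above is the mgr's devicehealth module bootstrapping its storage: it creates the one-PG .mgr pool and tags it with the mgr application, which is why the pgmap below jumps from "0 pgs" to "1 pgs". A hedged CLI equivalent of the two mon commands in the audit log (the --yes-i-really-mean-it flag corresponds to the yes_i_really_mean_it field in the logged command, needed because pool names starting with "." are reserved for internal use):

    ceph osd pool create .mgr 1 1 --yes-i-really-mean-it
    ceph osd pool application enable .mgr mgr --yes-i-really-mean-it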
Nov 24 09:26:42 compute-0 ceph-mon[74331]: osdmap e11: 2 total, 2 up, 2 in
Nov 24 09:26:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:26:42 compute-0 ceph-mgr[74626]: [devicehealth INFO root] creating main.db for devicehealth
Nov 24 09:26:42 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Check health
Nov 24 09:26:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 24 09:26:42 compute-0 sudo[84878]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 24 09:26:42 compute-0 sudo[84878]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 09:26:42 compute-0 sudo[84878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 24 09:26:42 compute-0 sudo[84878]: pam_unix(sudo:session): session closed for user root
Nov 24 09:26:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
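The sudo lines above are the devicehealth module scraping SMART data: it runs /usr/sbin/smartctl -x --json=o against each disk as root, and the pam_systemd "Failed to connect to system bus" message is expected noise in a container with no D-Bus. Hedged examples of inspecting what this scrape feeds into (<devid> is a placeholder for an id taken from the first command's output):

    ceph device ls
    ceph device get-health-metrics <devid>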
Nov 24 09:26:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 24 09:26:42 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:26:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 24 09:26:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 24 09:26:43 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 24 09:26:43 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 24 09:26:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:26:43 compute-0 ceph-mon[74331]: pgmap v42: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 24 09:26:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Nov 24 09:26:43 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Nov 24 09:26:43 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mauvni(active, since 78s)
Nov 24 09:26:44 compute-0 ceph-mon[74331]: osdmap e12: 2 total, 2 up, 2 in
Nov 24 09:26:44 compute-0 ceph-mon[74331]: mgrmap e9: compute-0.mauvni(active, since 78s)
Nov 24 09:26:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 24 09:26:45 compute-0 ceph-mon[74331]: pgmap v44: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 24 09:26:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:26:48 compute-0 ceph-mon[74331]: pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:50 compute-0 ceph-mon[74331]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:52 compute-0 ceph-mon[74331]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:26:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:54 compute-0 ceph-mon[74331]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:55 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:26:55 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:55 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:26:55 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:55 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:26:55 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:55 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:26:55 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:55 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 09:26:55 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:26:55 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:26:55 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:55 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:26:55 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:26:55 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:26:55 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:26:55 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:26:55 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:26:55 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:26:55 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:26:55 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:26:55 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:26:55 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:26:55 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:26:56 compute-0 ceph-mon[74331]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:56 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:56 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:56 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:56 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:56 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:26:56 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:56 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:26:56 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:26:56 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:26:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:57 compute-0 ceph-mon[74331]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:26:57 compute-0 ceph-mon[74331]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:26:57 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:26:57 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:26:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:26:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:26:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:26:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:26:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:57 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 8ed1668b-e975-4bbe-b767-2d62437f7169 (Updating mon deployment (+2 -> 3))
Nov 24 09:26:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 24 09:26:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:26:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 24 09:26:57 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:26:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:26:57 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:57 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Nov 24 09:26:57 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Nov 24 09:26:58 compute-0 ceph-mon[74331]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:26:58 compute-0 ceph-mon[74331]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:58 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:58 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:58 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:26:58 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:26:58 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:26:58 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:26:58 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 24 09:26:58 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 09:26:59 compute-0 ceph-mon[74331]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:26:59 compute-0 ceph-mon[74331]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:26:59 compute-0 ceph-mon[74331]: Deploying daemon mon.compute-2 on compute-2
Nov 24 09:26:59 compute-0 ceph-mon[74331]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 24 09:26:59 compute-0 ceph-mon[74331]: Cluster is now healthy
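With the mon and mgr specs now applying, the CEPHADM_APPLY_SPEC_FAIL check clears and the cluster reports healthy. A hedged pair of commands to confirm the same state from a shell, if following along:

    ceph -s
    ceph health detail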
Nov 24 09:26:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:00 compute-0 ceph-mon[74331]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:27:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 24 09:27:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 24 09:27:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 24 09:27:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 24 09:27:00 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:00 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:00 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Nov 24 09:27:00 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Nov 24 09:27:00 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2013861849; not ready for session (expect reconnect)
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 24 09:27:00 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:00 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 24 09:27:00 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:27:00 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 24 09:27:00 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:00 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 24 09:27:00 compute-0 ceph-mon[74331]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Nov 24 09:27:00 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:27:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:01 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2013861849; not ready for session (expect reconnect)
Nov 24 09:27:01 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 24 09:27:01 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:01 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 24 09:27:02 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:27:02 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 24 09:27:02 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 24 09:27:02 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 24 09:27:02 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3124763100; not ready for session (expect reconnect)
Nov 24 09:27:02 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:02 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:02 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 24 09:27:02 compute-0 sudo[84904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pynppdkbazkkpyrqvfbtcmuygtclhgpg ; /usr/bin/python3'
Nov 24 09:27:02 compute-0 sudo[84904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:02 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2013861849; not ready for session (expect reconnect)
Nov 24 09:27:02 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 24 09:27:02 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:02 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
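
The repeating "mon metadata ... dispatch" / "failed to return metadata" pairs are the mgr polling for the mons it has just deployed. As the log itself shows, the error is (2) No such file or directory while the new mon is still absent from the monmap, and becomes (22) Invalid argument once it has been added (see the monmap update at 09:27:00) but has not yet reported metadata. The same query can be run by hand with the standard CLI; hostnames are taken from the log:

    # sketch: issue the same metadata query the mgr keeps retrying
    ceph mon metadata compute-1   # errors until the new mon joins quorum
    ceph mon stat                 # current monmap epoch and quorum members
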
Nov 24 09:27:03 compute-0 python3[84906]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:03 compute-0 podman[84908]: 2025-11-24 09:27:03.121811315 +0000 UTC m=+0.038681941 container create 59d871d90150489695af6bca9a5d5bef2c4d499ce9b4d2dc345579c32bec1e13 (image=quay.io/ceph/ceph:v19, name=naughty_haibt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:03 compute-0 systemd[1]: Started libpod-conmon-59d871d90150489695af6bca9a5d5bef2c4d499ce9b4d2dc345579c32bec1e13.scope.
Nov 24 09:27:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80233455727e08dc9edcf9b91a6c62737f1709948d30f56f52d0c86070757ab4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80233455727e08dc9edcf9b91a6c62737f1709948d30f56f52d0c86070757ab4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80233455727e08dc9edcf9b91a6c62737f1709948d30f56f52d0c86070757ab4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:03 compute-0 podman[84908]: 2025-11-24 09:27:03.10575484 +0000 UTC m=+0.022625486 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:03 compute-0 podman[84908]: 2025-11-24 09:27:03.214452223 +0000 UTC m=+0.131322839 container init 59d871d90150489695af6bca9a5d5bef2c4d499ce9b4d2dc345579c32bec1e13 (image=quay.io/ceph/ceph:v19, name=naughty_haibt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 09:27:03 compute-0 podman[84908]: 2025-11-24 09:27:03.222306284 +0000 UTC m=+0.139176950 container start 59d871d90150489695af6bca9a5d5bef2c4d499ce9b4d2dc345579c32bec1e13 (image=quay.io/ceph/ceph:v19, name=naughty_haibt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 09:27:03 compute-0 podman[84908]: 2025-11-24 09:27:03.226788162 +0000 UTC m=+0.143658788 container attach 59d871d90150489695af6bca9a5d5bef2c4d499ce9b4d2dc345579c32bec1e13 (image=quay.io/ceph/ceph:v19, name=naughty_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
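
The Ansible task at 09:27:03 runs the ceph CLI from the quay.io/ceph/ceph:v19 image instead of host packages, bind-mounting /etc/ceph for the conf and admin keyring. The recurring podman invocation can be folded into a small wrapper; a sketch in which the function name ceph_ctr is an assumption and every flag is taken verbatim from the logged command:

    # sketch: wrapper for the containerized ceph CLI used by the playbook
    ceph_ctr() {
        sudo podman run --rm --net=host --ipc=host \
            --volume /etc/ceph:/etc/ceph:z \
            --entrypoint ceph quay.io/ceph/ceph:v19 \
            --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 \
            -c /etc/ceph/ceph.conf \
            -k /etc/ceph/ceph.client.admin.keyring "$@"
    }
    ceph_ctr status --format json
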
Nov 24 09:27:03 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 24 09:27:03 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 24 09:27:03 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3124763100; not ready for session (expect reconnect)
Nov 24 09:27:03 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:03 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:03 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 24 09:27:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:03 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 24 09:27:03 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2013861849; not ready for session (expect reconnect)
Nov 24 09:27:03 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 24 09:27:03 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:03 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 24 09:27:04 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 24 09:27:04 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 24 09:27:04 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3124763100; not ready for session (expect reconnect)
Nov 24 09:27:04 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:04 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:04 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 24 09:27:04 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2013861849; not ready for session (expect reconnect)
Nov 24 09:27:04 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 24 09:27:04 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:04 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 24 09:27:05 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3124763100; not ready for session (expect reconnect)
Nov 24 09:27:05 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:05 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 24 09:27:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:05 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2013861849; not ready for session (expect reconnect)
Nov 24 09:27:05 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:05 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 24 09:27:05 compute-0 ceph-mon[74331]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Nov 24 09:27:05 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : monmap epoch 2
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : last_changed 2025-11-24T09:27:00.955946+0000
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : created 2025-11-24T09:25:03.414609+0000
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 24 09:27:05 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsmap 
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mauvni(active, since 100s)
Nov 24 09:27:05 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : overall HEALTH_OK
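
This election summary records monmap epoch 2: compute-2 has joined compute-0 in quorum while compute-1 is still being deployed. The same state can be read back with the standard quorum and monmap commands; the jq filter is illustrative:

    # sketch: confirm which mons are in quorum and dump the current monmap
    ceph quorum_status --format json | jq -r '.quorum_names[]'
    ceph mon dump
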
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:06 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 8ed1668b-e975-4bbe-b767-2d62437f7169 (Updating mon deployment (+2 -> 3))
Nov 24 09:27:06 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 8ed1668b-e975-4bbe-b767-2d62437f7169 (Updating mon deployment (+2 -> 3)) in 8 seconds
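
The progress module lines track the mon scale-out as a named event, here "Updating mon deployment (+2 -> 3)" completing in 8 seconds. Assuming the mgr progress module's usual commands, in-flight and recent events can be listed directly:

    # sketch: list in-flight and recently completed progress events
    ceph progress
    ceph progress json    # machine-readable form, includes event ids
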
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:06 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev e8559756-67b8-4035-a217-f4e8b616f22e (Updating mgr deployment (+2 -> 3))
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
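
The dispatch/finished audit pair above is the mon minting a key for the new mgr daemon with the standard mgr capability profile. The equivalent CLI form of that exact mon_command:

    # sketch: mint (or fetch) the mgr daemon key with the same caps
    ceph auth get-or-create mgr.compute-2.rzcnzg \
        mon 'profile mgr' osd 'allow *' mds 'allow *'
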
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.rzcnzg on compute-2
Nov 24 09:27:06 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.rzcnzg on compute-2
Nov 24 09:27:06 compute-0 ceph-mon[74331]: Deploying daemon mon.compute-1 on compute-1
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0 calling monitor election
Nov 24 09:27:06 compute-0 ceph-mon[74331]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-2 calling monitor election
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: monmap epoch 2
Nov 24 09:27:06 compute-0 ceph-mon[74331]: fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:06 compute-0 ceph-mon[74331]: last_changed 2025-11-24T09:27:00.955946+0000
Nov 24 09:27:06 compute-0 ceph-mon[74331]: created 2025-11-24T09:25:03.414609+0000
Nov 24 09:27:06 compute-0 ceph-mon[74331]: min_mon_release 19 (squid)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: election_strategy: 1
Nov 24 09:27:06 compute-0 ceph-mon[74331]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 24 09:27:06 compute-0 ceph-mon[74331]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 24 09:27:06 compute-0 ceph-mon[74331]: fsmap 
Nov 24 09:27:06 compute-0 ceph-mon[74331]: osdmap e12: 2 total, 2 up, 2 in
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mgrmap e9: compute-0.mauvni(active, since 100s)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: overall HEALTH_OK
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969865947' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 09:27:06 compute-0 naughty_haibt[84925]: 
Nov 24 09:27:06 compute-0 naughty_haibt[84925]: {"fsid":"84a084c3-61a7-5de7-8207-1f88efa59a64","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-2"],"quorum_age":0,"monmap":{"epoch":2,"min_mon_release_name":"squid","num_mons":2},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":2,"osd_up_since":1763976399,"num_in_osds":2,"osd_in_since":1763976381,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":55775232,"bytes_avail":42885509120,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-11-24T09:25:05:540478+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-24T09:26:26.946461+0000","services":{}},"progress_events":{"8ed1668b-e975-4bbe-b767-2d62437f7169":{"message":"Updating mon deployment (+2 -> 3) (2s)\n      [==============..............] (remaining: 2s)","progress":0.5,"add_to_ceph_s":true}}}
Nov 24 09:27:06 compute-0 systemd[1]: libpod-59d871d90150489695af6bca9a5d5bef2c4d499ce9b4d2dc345579c32bec1e13.scope: Deactivated successfully.
Nov 24 09:27:06 compute-0 podman[84908]: 2025-11-24 09:27:06.689409338 +0000 UTC m=+3.606279974 container died 59d871d90150489695af6bca9a5d5bef2c4d499ce9b4d2dc345579c32bec1e13 (image=quay.io/ceph/ceph:v19, name=naughty_haibt, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-80233455727e08dc9edcf9b91a6c62737f1709948d30f56f52d0c86070757ab4-merged.mount: Deactivated successfully.
Nov 24 09:27:06 compute-0 podman[84908]: 2025-11-24 09:27:06.73473314 +0000 UTC m=+3.651603776 container remove 59d871d90150489695af6bca9a5d5bef2c4d499ce9b4d2dc345579c32bec1e13 (image=quay.io/ceph/ceph:v19, name=naughty_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:06 compute-0 systemd[1]: libpod-conmon-59d871d90150489695af6bca9a5d5bef2c4d499ce9b4d2dc345579c32bec1e13.scope: Deactivated successfully.
Nov 24 09:27:06 compute-0 sudo[84904]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Nov 24 09:27:06 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3124763100; not ready for session (expect reconnect)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:06 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 24 09:27:06 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 24 09:27:06 compute-0 ceph-mon[74331]: paxos.0).electionLogic(10) init, last seen epoch 10
Nov 24 09:27:06 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:27:06 compute-0 ceph-mgr[74626]: mgr.server handle_report got status from non-daemon mon.compute-2
Nov 24 09:27:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:06.959+0000 7fa44920f640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
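
The long journald identifier on the previous line reflects cephadm's systemd naming scheme, ceph-<fsid>@<daemon-type>.<id>; the "status from non-daemon mon.compute-2" warning is the mgr hearing from a mon it does not yet track while the monmap is changing. On the host, cephadm-managed daemons and their units can be inspected as below; the unit name is assembled from the fsid in the log and is an assumption about this host's layout:

    # sketch: list cephadm-managed daemons and check the active mgr's unit
    sudo cephadm ls
    sudo systemctl status \
        'ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@mgr.compute-0.mauvni.service'
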
Nov 24 09:27:07 compute-0 sudo[84985]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxpxllbeezzclmudelgwbkqjlivgtuuq ; /usr/bin/python3'
Nov 24 09:27:07 compute-0 sudo[84985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:07 compute-0 python3[84987]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:07 compute-0 podman[84988]: 2025-11-24 09:27:07.351004807 +0000 UTC m=+0.043835728 container create 4a0d803821467a849e66fa251cd5f4487add925ddd5c445f9deec491425010a4 (image=quay.io/ceph/ceph:v19, name=boring_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:07 compute-0 systemd[1]: Started libpod-conmon-4a0d803821467a849e66fa251cd5f4487add925ddd5c445f9deec491425010a4.scope.
Nov 24 09:27:07 compute-0 podman[84988]: 2025-11-24 09:27:07.331296642 +0000 UTC m=+0.024127613 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/568701b731e1e9309cba3da5ac9c7f828673030e68ce21938f153ebb0229a331/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/568701b731e1e9309cba3da5ac9c7f828673030e68ce21938f153ebb0229a331/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:07 compute-0 podman[84988]: 2025-11-24 09:27:07.441922973 +0000 UTC m=+0.134753914 container init 4a0d803821467a849e66fa251cd5f4487add925ddd5c445f9deec491425010a4 (image=quay.io/ceph/ceph:v19, name=boring_jackson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:07 compute-0 podman[84988]: 2025-11-24 09:27:07.447750022 +0000 UTC m=+0.140580933 container start 4a0d803821467a849e66fa251cd5f4487add925ddd5c445f9deec491425010a4 (image=quay.io/ceph/ceph:v19, name=boring_jackson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True)
Nov 24 09:27:07 compute-0 podman[84988]: 2025-11-24 09:27:07.450795736 +0000 UTC m=+0.143626677 container attach 4a0d803821467a849e66fa251cd5f4487add925ddd5c445f9deec491425010a4 (image=quay.io/ceph/ceph:v19, name=boring_jackson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 09:27:07 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:07 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:07 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3124763100; not ready for session (expect reconnect)
Nov 24 09:27:07 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:07 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:07 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 24 09:27:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:07 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:07 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:27:08 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:08 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:08 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:08 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3124763100; not ready for session (expect reconnect)
Nov 24 09:27:08 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:08 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:08 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 24 09:27:08 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:09 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:09 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3124763100; not ready for session (expect reconnect)
Nov 24 09:27:09 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:09 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:09 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 24 09:27:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:10 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:10 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 3 completed events
Nov 24 09:27:10 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:27:10 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:10 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3124763100; not ready for session (expect reconnect)
Nov 24 09:27:10 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:10 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 24 09:27:10 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:11 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:11 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:11 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 24 09:27:11 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3124763100; not ready for session (expect reconnect)
Nov 24 09:27:11 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:11 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 24 09:27:11 compute-0 ceph-mon[74331]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Nov 24 09:27:11 compute-0 ceph-mon[74331]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : monmap epoch 3
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : last_changed 2025-11-24T09:27:06.832853+0000
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : created 2025-11-24T09:25:03.414609+0000
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Nov 24 09:27:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsmap 
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mauvni(active, since 106s)
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 24 09:27:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:11 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:27:12 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:27:12 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:12 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:12 compute-0 ceph-mon[74331]: mon.compute-0 calling monitor election
Nov 24 09:27:12 compute-0 ceph-mon[74331]: mon.compute-2 calling monitor election
Nov 24 09:27:12 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:12 compute-0 ceph-mon[74331]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:12 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:12 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:12 compute-0 ceph-mon[74331]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:12 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:12 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:12 compute-0 ceph-mon[74331]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 24 09:27:12 compute-0 ceph-mon[74331]: monmap epoch 3
Nov 24 09:27:12 compute-0 ceph-mon[74331]: fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:12 compute-0 ceph-mon[74331]: last_changed 2025-11-24T09:27:06.832853+0000
Nov 24 09:27:12 compute-0 ceph-mon[74331]: created 2025-11-24T09:25:03.414609+0000
Nov 24 09:27:12 compute-0 ceph-mon[74331]: min_mon_release 19 (squid)
Nov 24 09:27:12 compute-0 ceph-mon[74331]: election_strategy: 1
Nov 24 09:27:12 compute-0 ceph-mon[74331]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 24 09:27:12 compute-0 ceph-mon[74331]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 24 09:27:12 compute-0 ceph-mon[74331]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Nov 24 09:27:12 compute-0 ceph-mon[74331]: fsmap 
Nov 24 09:27:12 compute-0 ceph-mon[74331]: osdmap e12: 2 total, 2 up, 2 in
Nov 24 09:27:12 compute-0 ceph-mon[74331]: mgrmap e9: compute-0.mauvni(active, since 106s)
Nov 24 09:27:12 compute-0 ceph-mon[74331]: overall HEALTH_OK
Nov 24 09:27:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 24 09:27:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.qelqsg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 24 09:27:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.qelqsg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:27:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.qelqsg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 09:27:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 09:27:12 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:27:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:12 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:12 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.qelqsg on compute-1
Nov 24 09:27:12 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.qelqsg on compute-1
Nov 24 09:27:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 24 09:27:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1623978198' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:12 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3124763100; not ready for session (expect reconnect)
Nov 24 09:27:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:12 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:13 compute-0 ceph-mon[74331]: mon.compute-1 calling monitor election
Nov 24 09:27:13 compute-0 ceph-mon[74331]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:13 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:13 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:13 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:13 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:13 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.qelqsg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:27:13 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.qelqsg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 09:27:13 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:27:13 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:13 compute-0 ceph-mon[74331]: Deploying daemon mgr.compute-1.qelqsg on compute-1
Nov 24 09:27:13 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1623978198' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:13 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 24 09:27:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1623978198' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Nov 24 09:27:13 compute-0 boring_jackson[85003]: pool 'vms' created
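
Note how the shell form "osd pool create vms replicated_rule --autoscale-mode on" reaches the mon with replicated_rule recorded under erasure_code_profile (see the dispatch entry at 09:27:12), apparently because the optional pg_num/pgp_num/pool-type positionals were skipped and the argument matched the next free string slot; the pool is nevertheless created. A fully explicit form avoids the ambiguous positional; the pg counts here are illustrative:

    # sketch: replicated pool creation with every positional spelled out
    ceph osd pool create vms 32 32 replicated replicated_rule \
        --autoscale-mode on
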
Nov 24 09:27:13 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Nov 24 09:27:13 compute-0 systemd[1]: libpod-4a0d803821467a849e66fa251cd5f4487add925ddd5c445f9deec491425010a4.scope: Deactivated successfully.
Nov 24 09:27:13 compute-0 podman[84988]: 2025-11-24 09:27:13.065247436 +0000 UTC m=+5.758078357 container died 4a0d803821467a849e66fa251cd5f4487add925ddd5c445f9deec491425010a4 (image=quay.io/ceph/ceph:v19, name=boring_jackson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-568701b731e1e9309cba3da5ac9c7f828673030e68ce21938f153ebb0229a331-merged.mount: Deactivated successfully.
Nov 24 09:27:13 compute-0 podman[84988]: 2025-11-24 09:27:13.102924044 +0000 UTC m=+5.795754965 container remove 4a0d803821467a849e66fa251cd5f4487add925ddd5c445f9deec491425010a4 (image=quay.io/ceph/ceph:v19, name=boring_jackson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:27:13 compute-0 systemd[1]: libpod-conmon-4a0d803821467a849e66fa251cd5f4487add925ddd5c445f9deec491425010a4.scope: Deactivated successfully.
Nov 24 09:27:13 compute-0 sudo[84985]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:13 compute-0 sudo[85063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyokftyxpitctfjovtwycwmhhwtjwquu ; /usr/bin/python3'
Nov 24 09:27:13 compute-0 sudo[85063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:13 compute-0 python3[85065]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:13 compute-0 podman[85066]: 2025-11-24 09:27:13.471030531 +0000 UTC m=+0.035211223 container create b859b410be13796fb039813040938de2b1fead18ff787606c17dc16480b56854 (image=quay.io/ceph/ceph:v19, name=amazing_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:13 compute-0 systemd[1]: Started libpod-conmon-b859b410be13796fb039813040938de2b1fead18ff787606c17dc16480b56854.scope.
Nov 24 09:27:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f98af21bb985f47325f79256374580973ad0c95958eb34c537d28df61a387ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f98af21bb985f47325f79256374580973ad0c95958eb34c537d28df61a387ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:13 compute-0 podman[85066]: 2025-11-24 09:27:13.53634418 +0000 UTC m=+0.100524902 container init b859b410be13796fb039813040938de2b1fead18ff787606c17dc16480b56854 (image=quay.io/ceph/ceph:v19, name=amazing_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 24 09:27:13 compute-0 podman[85066]: 2025-11-24 09:27:13.541369094 +0000 UTC m=+0.105549786 container start b859b410be13796fb039813040938de2b1fead18ff787606c17dc16480b56854 (image=quay.io/ceph/ceph:v19, name=amazing_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:13 compute-0 podman[85066]: 2025-11-24 09:27:13.546647796 +0000 UTC m=+0.110828508 container attach b859b410be13796fb039813040938de2b1fead18ff787606c17dc16480b56854 (image=quay.io/ceph/ceph:v19, name=amazing_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:13 compute-0 podman[85066]: 2025-11-24 09:27:13.456079772 +0000 UTC m=+0.020260484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:27:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:27:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 24 09:27:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:13 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev e8559756-67b8-4035-a217-f4e8b616f22e (Updating mgr deployment (+2 -> 3))
Nov 24 09:27:13 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event e8559756-67b8-4035-a217-f4e8b616f22e (Updating mgr deployment (+2 -> 3)) in 8 seconds
Nov 24 09:27:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 24 09:27:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:13 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev e53512f4-e709-4c8d-9980-702bbe0593bf (Updating crash deployment (+1 -> 3))
Nov 24 09:27:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 24 09:27:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:27:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 09:27:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:13 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:13 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Nov 24 09:27:13 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Nov 24 09:27:13 compute-0 ceph-mgr[74626]: mgr.server handle_report got status from non-daemon mon.compute-1
Nov 24 09:27:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:13.834+0000 7fa44920f640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Nov 24 09:27:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 24 09:27:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2842040450' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v60: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 24 09:27:14 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1623978198' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:14 compute-0 ceph-mon[74331]: osdmap e13: 2 total, 2 up, 2 in
Nov 24 09:27:14 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:14 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:14 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:14 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:14 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:27:14 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 09:27:14 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:14 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2842040450' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2842040450' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Nov 24 09:27:14 compute-0 amazing_cori[85081]: pool 'volumes' created
Nov 24 09:27:14 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Nov 24 09:27:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:14 compute-0 systemd[1]: libpod-b859b410be13796fb039813040938de2b1fead18ff787606c17dc16480b56854.scope: Deactivated successfully.
Nov 24 09:27:14 compute-0 podman[85066]: 2025-11-24 09:27:14.091800417 +0000 UTC m=+0.655981179 container died b859b410be13796fb039813040938de2b1fead18ff787606c17dc16480b56854 (image=quay.io/ceph/ceph:v19, name=amazing_cori, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 24 09:27:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f98af21bb985f47325f79256374580973ad0c95958eb34c537d28df61a387ed-merged.mount: Deactivated successfully.
Nov 24 09:27:14 compute-0 podman[85066]: 2025-11-24 09:27:14.13385617 +0000 UTC m=+0.698036852 container remove b859b410be13796fb039813040938de2b1fead18ff787606c17dc16480b56854 (image=quay.io/ceph/ceph:v19, name=amazing_cori, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 09:27:14 compute-0 sudo[85063]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:14 compute-0 systemd[1]: libpod-conmon-b859b410be13796fb039813040938de2b1fead18ff787606c17dc16480b56854.scope: Deactivated successfully.
Nov 24 09:27:14 compute-0 sudo[85142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbgcfaxlopxmaqtgyeizmtnlrljmfbpl ; /usr/bin/python3'
Nov 24 09:27:14 compute-0 sudo[85142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:14 compute-0 python3[85144]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:14 compute-0 podman[85145]: 2025-11-24 09:27:14.459083238 +0000 UTC m=+0.036408390 container create 574e7d165847e4a3117f5558c88383784e6df806c2da2de6e6d25cda0d81569d (image=quay.io/ceph/ceph:v19, name=compassionate_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 09:27:14 compute-0 systemd[1]: Started libpod-conmon-574e7d165847e4a3117f5558c88383784e6df806c2da2de6e6d25cda0d81569d.scope.
Nov 24 09:27:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a335e76e0f247b1c4aff0a3c2298822996e96bd95791304b034c65305d53cb55/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a335e76e0f247b1c4aff0a3c2298822996e96bd95791304b034c65305d53cb55/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:14 compute-0 podman[85145]: 2025-11-24 09:27:14.533833686 +0000 UTC m=+0.111158818 container init 574e7d165847e4a3117f5558c88383784e6df806c2da2de6e6d25cda0d81569d (image=quay.io/ceph/ceph:v19, name=compassionate_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 24 09:27:14 compute-0 podman[85145]: 2025-11-24 09:27:14.53915349 +0000 UTC m=+0.116478632 container start 574e7d165847e4a3117f5558c88383784e6df806c2da2de6e6d25cda0d81569d (image=quay.io/ceph/ceph:v19, name=compassionate_diffie, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:27:14 compute-0 podman[85145]: 2025-11-24 09:27:14.44223291 +0000 UTC m=+0.019558052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:14 compute-0 podman[85145]: 2025-11-24 09:27:14.542362808 +0000 UTC m=+0.119687980 container attach 574e7d165847e4a3117f5558c88383784e6df806c2da2de6e6d25cda0d81569d (image=quay.io/ceph/ceph:v19, name=compassionate_diffie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 24 09:27:14 compute-0 ceph-mon[74331]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 09:27:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 24 09:27:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1077027605' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 24 09:27:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1077027605' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Nov 24 09:27:15 compute-0 compassionate_diffie[85160]: pool 'backups' created
Nov 24 09:27:15 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Nov 24 09:27:15 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 15 pg[4.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:15 compute-0 ceph-mon[74331]: Deploying daemon crash.compute-2 on compute-2
Nov 24 09:27:15 compute-0 ceph-mon[74331]: pgmap v60: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2842040450' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:15 compute-0 ceph-mon[74331]: osdmap e14: 2 total, 2 up, 2 in
Nov 24 09:27:15 compute-0 ceph-mon[74331]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 09:27:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1077027605' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:15 compute-0 systemd[1]: libpod-574e7d165847e4a3117f5558c88383784e6df806c2da2de6e6d25cda0d81569d.scope: Deactivated successfully.
Nov 24 09:27:15 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:15 compute-0 podman[85145]: 2025-11-24 09:27:15.098496947 +0000 UTC m=+0.675822129 container died 574e7d165847e4a3117f5558c88383784e6df806c2da2de6e6d25cda0d81569d (image=quay.io/ceph/ceph:v19, name=compassionate_diffie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a335e76e0f247b1c4aff0a3c2298822996e96bd95791304b034c65305d53cb55-merged.mount: Deactivated successfully.
Nov 24 09:27:15 compute-0 podman[85145]: 2025-11-24 09:27:15.144231682 +0000 UTC m=+0.721556824 container remove 574e7d165847e4a3117f5558c88383784e6df806c2da2de6e6d25cda0d81569d (image=quay.io/ceph/ceph:v19, name=compassionate_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:15 compute-0 systemd[1]: libpod-conmon-574e7d165847e4a3117f5558c88383784e6df806c2da2de6e6d25cda0d81569d.scope: Deactivated successfully.
Nov 24 09:27:15 compute-0 sudo[85142]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:15 compute-0 sudo[85224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plghxvrnbcpeugvmcowyufliicmofemw ; /usr/bin/python3'
Nov 24 09:27:15 compute-0 sudo[85224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:15 compute-0 python3[85226]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:15 compute-0 podman[85227]: 2025-11-24 09:27:15.534198702 +0000 UTC m=+0.036694719 container create e7d37170eff00816e97f951343a94ec064792355570a06a5c50dd0fe866a06c7 (image=quay.io/ceph/ceph:v19, name=amazing_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 09:27:15 compute-0 systemd[1]: Started libpod-conmon-e7d37170eff00816e97f951343a94ec064792355570a06a5c50dd0fe866a06c7.scope.
Nov 24 09:27:15 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7c9a360fa83a03b94a9c456d0dbb815c0a255afd102be89a2cff9701540f54/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7c9a360fa83a03b94a9c456d0dbb815c0a255afd102be89a2cff9701540f54/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:15 compute-0 podman[85227]: 2025-11-24 09:27:15.607164915 +0000 UTC m=+0.109660932 container init e7d37170eff00816e97f951343a94ec064792355570a06a5c50dd0fe866a06c7 (image=quay.io/ceph/ceph:v19, name=amazing_jackson, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 09:27:15 compute-0 podman[85227]: 2025-11-24 09:27:15.612560261 +0000 UTC m=+0.115056278 container start e7d37170eff00816e97f951343a94ec064792355570a06a5c50dd0fe866a06c7 (image=quay.io/ceph/ceph:v19, name=amazing_jackson, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:27:15 compute-0 podman[85227]: 2025-11-24 09:27:15.518403726 +0000 UTC m=+0.020899753 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:15 compute-0 podman[85227]: 2025-11-24 09:27:15.61511981 +0000 UTC m=+0.117615827 container attach e7d37170eff00816e97f951343a94ec064792355570a06a5c50dd0fe866a06c7 (image=quay.io/ceph/ceph:v19, name=amazing_jackson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 24 09:27:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:27:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:27:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 24 09:27:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:15 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev e53512f4-e709-4c8d-9980-702bbe0593bf (Updating crash deployment (+1 -> 3))
Nov 24 09:27:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 24 09:27:15 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event e53512f4-e709-4c8d-9980-702bbe0593bf (Updating crash deployment (+1 -> 3)) in 2 seconds
Nov 24 09:27:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:27:15 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:27:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:27:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:27:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:15 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:27:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:27:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:15 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:15 compute-0 sudo[85266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:15 compute-0 sudo[85266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:15 compute-0 sudo[85266]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:15 compute-0 sudo[85291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:27:15 compute-0 sudo[85291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v63: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 24 09:27:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2174323893' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 24 09:27:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2174323893' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Nov 24 09:27:16 compute-0 amazing_jackson[85243]: pool 'images' created
Nov 24 09:27:16 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Nov 24 09:27:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 16 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:16 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1077027605' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:16 compute-0 ceph-mon[74331]: osdmap e15: 2 total, 2 up, 2 in
Nov 24 09:27:16 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:16 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:16 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:16 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:16 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:27:16 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:27:16 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:16 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:27:16 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:16 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2174323893' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:16 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2174323893' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:16 compute-0 ceph-mon[74331]: osdmap e16: 2 total, 2 up, 2 in
Nov 24 09:27:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:16 compute-0 systemd[1]: libpod-e7d37170eff00816e97f951343a94ec064792355570a06a5c50dd0fe866a06c7.scope: Deactivated successfully.
Nov 24 09:27:16 compute-0 podman[85227]: 2025-11-24 09:27:16.115738891 +0000 UTC m=+0.618234908 container died e7d37170eff00816e97f951343a94ec064792355570a06a5c50dd0fe866a06c7 (image=quay.io/ceph/ceph:v19, name=amazing_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec7c9a360fa83a03b94a9c456d0dbb815c0a255afd102be89a2cff9701540f54-merged.mount: Deactivated successfully.
Nov 24 09:27:16 compute-0 podman[85227]: 2025-11-24 09:27:16.163737176 +0000 UTC m=+0.666233203 container remove e7d37170eff00816e97f951343a94ec064792355570a06a5c50dd0fe866a06c7 (image=quay.io/ceph/ceph:v19, name=amazing_jackson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:16 compute-0 systemd[1]: libpod-conmon-e7d37170eff00816e97f951343a94ec064792355570a06a5c50dd0fe866a06c7.scope: Deactivated successfully.
Nov 24 09:27:16 compute-0 sudo[85224]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:16 compute-0 podman[85369]: 2025-11-24 09:27:16.269752646 +0000 UTC m=+0.038512365 container create 0abd018d8749b01f1bb9bdcf5efd45a7b94d396a8b28df9c4f747fb321b86bf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_carver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:16 compute-0 systemd[1]: Started libpod-conmon-0abd018d8749b01f1bb9bdcf5efd45a7b94d396a8b28df9c4f747fb321b86bf8.scope.
Nov 24 09:27:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:16 compute-0 podman[85369]: 2025-11-24 09:27:16.254890478 +0000 UTC m=+0.023650207 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:16 compute-0 sudo[85411]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdhpiuilgznprxntbicgusdvgdsslkrd ; /usr/bin/python3'
Nov 24 09:27:16 compute-0 podman[85369]: 2025-11-24 09:27:16.358397861 +0000 UTC m=+0.127157630 container init 0abd018d8749b01f1bb9bdcf5efd45a7b94d396a8b28df9c4f747fb321b86bf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_carver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:16 compute-0 sudo[85411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:16 compute-0 podman[85369]: 2025-11-24 09:27:16.364650013 +0000 UTC m=+0.133409732 container start 0abd018d8749b01f1bb9bdcf5efd45a7b94d396a8b28df9c4f747fb321b86bf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_carver, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:16 compute-0 happy_carver[85403]: 167 167
Nov 24 09:27:16 compute-0 podman[85369]: 2025-11-24 09:27:16.368033077 +0000 UTC m=+0.136803296 container attach 0abd018d8749b01f1bb9bdcf5efd45a7b94d396a8b28df9c4f747fb321b86bf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_carver, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:16 compute-0 systemd[1]: libpod-0abd018d8749b01f1bb9bdcf5efd45a7b94d396a8b28df9c4f747fb321b86bf8.scope: Deactivated successfully.
Nov 24 09:27:16 compute-0 podman[85369]: 2025-11-24 09:27:16.368966186 +0000 UTC m=+0.137725935 container died 0abd018d8749b01f1bb9bdcf5efd45a7b94d396a8b28df9c4f747fb321b86bf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_carver, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-51a3da6a9029193c01dd2318cf06eac494fc9ffbb23ac737853ff2240183160d-merged.mount: Deactivated successfully.
Nov 24 09:27:16 compute-0 podman[85369]: 2025-11-24 09:27:16.403439056 +0000 UTC m=+0.172198785 container remove 0abd018d8749b01f1bb9bdcf5efd45a7b94d396a8b28df9c4f747fb321b86bf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_carver, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:16 compute-0 systemd[1]: libpod-conmon-0abd018d8749b01f1bb9bdcf5efd45a7b94d396a8b28df9c4f747fb321b86bf8.scope: Deactivated successfully.
Nov 24 09:27:16 compute-0 python3[85414]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:16 compute-0 podman[85435]: 2025-11-24 09:27:16.552945662 +0000 UTC m=+0.040114534 container create 8583146259aa7d886d32fc4d31b02ea732d8ccd14931afe0e23f5150a19ce4d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:16 compute-0 podman[85437]: 2025-11-24 09:27:16.557887204 +0000 UTC m=+0.038742552 container create 9aa7dc64d16e7e2494a08b3dce58907524c6442949dfbe853a30eea624c97887 (image=quay.io/ceph/ceph:v19, name=festive_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:16 compute-0 systemd[1]: Started libpod-conmon-8583146259aa7d886d32fc4d31b02ea732d8ccd14931afe0e23f5150a19ce4d4.scope.
Nov 24 09:27:16 compute-0 systemd[1]: Started libpod-conmon-9aa7dc64d16e7e2494a08b3dce58907524c6442949dfbe853a30eea624c97887.scope.
Nov 24 09:27:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15e59d4e3852e7a02e2746c89d3f49c81aa4a0cb5f6facecb5ecc387c511547/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272d6a8f5a4440748a05d289fba00b4fad0c0b61336987c2502eccd174b9b894/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272d6a8f5a4440748a05d289fba00b4fad0c0b61336987c2502eccd174b9b894/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15e59d4e3852e7a02e2746c89d3f49c81aa4a0cb5f6facecb5ecc387c511547/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15e59d4e3852e7a02e2746c89d3f49c81aa4a0cb5f6facecb5ecc387c511547/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15e59d4e3852e7a02e2746c89d3f49c81aa4a0cb5f6facecb5ecc387c511547/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15e59d4e3852e7a02e2746c89d3f49c81aa4a0cb5f6facecb5ecc387c511547/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:16 compute-0 podman[85437]: 2025-11-24 09:27:16.624883454 +0000 UTC m=+0.105738812 container init 9aa7dc64d16e7e2494a08b3dce58907524c6442949dfbe853a30eea624c97887 (image=quay.io/ceph/ceph:v19, name=festive_rhodes, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:16 compute-0 podman[85435]: 2025-11-24 09:27:16.628938769 +0000 UTC m=+0.116107671 container init 8583146259aa7d886d32fc4d31b02ea732d8ccd14931afe0e23f5150a19ce4d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:16 compute-0 podman[85435]: 2025-11-24 09:27:16.535261839 +0000 UTC m=+0.022430731 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:16 compute-0 podman[85437]: 2025-11-24 09:27:16.633125638 +0000 UTC m=+0.113980986 container start 9aa7dc64d16e7e2494a08b3dce58907524c6442949dfbe853a30eea624c97887 (image=quay.io/ceph/ceph:v19, name=festive_rhodes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:27:16 compute-0 podman[85437]: 2025-11-24 09:27:16.539051415 +0000 UTC m=+0.019906783 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:16 compute-0 podman[85437]: 2025-11-24 09:27:16.636172631 +0000 UTC m=+0.117027999 container attach 9aa7dc64d16e7e2494a08b3dce58907524c6442949dfbe853a30eea624c97887 (image=quay.io/ceph/ceph:v19, name=festive_rhodes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:27:16 compute-0 podman[85435]: 2025-11-24 09:27:16.637391779 +0000 UTC m=+0.124560651 container start 8583146259aa7d886d32fc4d31b02ea732d8ccd14931afe0e23f5150a19ce4d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:16 compute-0 podman[85435]: 2025-11-24 09:27:16.640082842 +0000 UTC m=+0.127251714 container attach 8583146259aa7d886d32fc4d31b02ea732d8ccd14931afe0e23f5150a19ce4d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:16 compute-0 intelligent_bardeen[85468]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:27:16 compute-0 intelligent_bardeen[85468]: --> All data devices are unavailable
Nov 24 09:27:16 compute-0 systemd[1]: libpod-8583146259aa7d886d32fc4d31b02ea732d8ccd14931afe0e23f5150a19ce4d4.scope: Deactivated successfully.
Nov 24 09:27:16 compute-0 podman[85506]: 2025-11-24 09:27:16.996254521 +0000 UTC m=+0.022352468 container died 8583146259aa7d886d32fc4d31b02ea732d8ccd14931afe0e23f5150a19ce4d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 09:27:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 24 09:27:17 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/499996439' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:17 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 5 completed events
Nov 24 09:27:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:27:17 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:17 compute-0 podman[85506]: 2025-11-24 09:27:17.034761605 +0000 UTC m=+0.060859532 container remove 8583146259aa7d886d32fc4d31b02ea732d8ccd14931afe0e23f5150a19ce4d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:27:17 compute-0 systemd[1]: libpod-conmon-8583146259aa7d886d32fc4d31b02ea732d8ccd14931afe0e23f5150a19ce4d4.scope: Deactivated successfully.
Nov 24 09:27:17 compute-0 sudo[85291]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 24 09:27:17 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/499996439' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Nov 24 09:27:17 compute-0 festive_rhodes[85469]: pool 'cephfs.cephfs.meta' created
Nov 24 09:27:17 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Nov 24 09:27:17 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 17 pg[6.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:17 compute-0 ceph-mon[74331]: pgmap v63: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/499996439' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:17 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/499996439' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:17 compute-0 ceph-mon[74331]: osdmap e17: 2 total, 2 up, 2 in
Nov 24 09:27:17 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 17 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:17 compute-0 systemd[1]: libpod-9aa7dc64d16e7e2494a08b3dce58907524c6442949dfbe853a30eea624c97887.scope: Deactivated successfully.
Nov 24 09:27:17 compute-0 podman[85437]: 2025-11-24 09:27:17.122447231 +0000 UTC m=+0.603302579 container died 9aa7dc64d16e7e2494a08b3dce58907524c6442949dfbe853a30eea624c97887 (image=quay.io/ceph/ceph:v19, name=festive_rhodes, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e15e59d4e3852e7a02e2746c89d3f49c81aa4a0cb5f6facecb5ecc387c511547-merged.mount: Deactivated successfully.
Nov 24 09:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-272d6a8f5a4440748a05d289fba00b4fad0c0b61336987c2502eccd174b9b894-merged.mount: Deactivated successfully.
Nov 24 09:27:17 compute-0 sudo[85524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:17 compute-0 sudo[85524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:17 compute-0 sudo[85524]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:17 compute-0 podman[85437]: 2025-11-24 09:27:17.163155302 +0000 UTC m=+0.644010650 container remove 9aa7dc64d16e7e2494a08b3dce58907524c6442949dfbe853a30eea624c97887 (image=quay.io/ceph/ceph:v19, name=festive_rhodes, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:27:17 compute-0 systemd[1]: libpod-conmon-9aa7dc64d16e7e2494a08b3dce58907524c6442949dfbe853a30eea624c97887.scope: Deactivated successfully.
Nov 24 09:27:17 compute-0 sudo[85411]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:17 compute-0 sudo[85560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:27:17 compute-0 sudo[85560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:17 compute-0 sudo[85608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gecsiuqevvvhmfzcoadcesyddnbahsqp ; /usr/bin/python3'
Nov 24 09:27:17 compute-0 sudo[85608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:17 compute-0 python3[85610]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:17 compute-0 podman[85639]: 2025-11-24 09:27:17.614632373 +0000 UTC m=+0.074946595 container create 11b753c791ec43a94446745ebc0ee2dbf8d89323fbaf0ea8675c554a229993e0 (image=quay.io/ceph/ceph:v19, name=modest_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:17 compute-0 podman[85666]: 2025-11-24 09:27:17.64834337 +0000 UTC m=+0.047300246 container create 9fedabf8ad188d5b47a2629b0e3dedff53c100ce36da4eabfae82325f0f7a642 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_thompson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:17 compute-0 systemd[1]: Started libpod-conmon-11b753c791ec43a94446745ebc0ee2dbf8d89323fbaf0ea8675c554a229993e0.scope.
Nov 24 09:27:17 compute-0 podman[85639]: 2025-11-24 09:27:17.567606517 +0000 UTC m=+0.027920789 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:17 compute-0 systemd[1]: Started libpod-conmon-9fedabf8ad188d5b47a2629b0e3dedff53c100ce36da4eabfae82325f0f7a642.scope.
Nov 24 09:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cbfc86b9aad56a70e0d9555bfc76b3e3b2519b75d56d73d1419309a422a88e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37cbfc86b9aad56a70e0d9555bfc76b3e3b2519b75d56d73d1419309a422a88e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "8adc21f3-187b-4333-b4ae-3cc82866c3f9"} v 0)
Nov 24 09:27:17 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8adc21f3-187b-4333-b4ae-3cc82866c3f9"}]: dispatch
Nov 24 09:27:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 24 09:27:17 compute-0 podman[85639]: 2025-11-24 09:27:17.70366023 +0000 UTC m=+0.163974472 container init 11b753c791ec43a94446745ebc0ee2dbf8d89323fbaf0ea8675c554a229993e0 (image=quay.io/ceph/ceph:v19, name=modest_lumiere, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:17 compute-0 podman[85666]: 2025-11-24 09:27:17.704696642 +0000 UTC m=+0.103653538 container init 9fedabf8ad188d5b47a2629b0e3dedff53c100ce36da4eabfae82325f0f7a642 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_thompson, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 24 09:27:17 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8adc21f3-187b-4333-b4ae-3cc82866c3f9"}]': finished
Nov 24 09:27:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Nov 24 09:27:17 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Nov 24 09:27:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:17 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:17 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:17 compute-0 podman[85666]: 2025-11-24 09:27:17.712704828 +0000 UTC m=+0.111661704 container start 9fedabf8ad188d5b47a2629b0e3dedff53c100ce36da4eabfae82325f0f7a642 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 09:27:17 compute-0 podman[85639]: 2025-11-24 09:27:17.714112332 +0000 UTC m=+0.174426554 container start 11b753c791ec43a94446745ebc0ee2dbf8d89323fbaf0ea8675c554a229993e0 (image=quay.io/ceph/ceph:v19, name=modest_lumiere, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Nov 24 09:27:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:17 compute-0 podman[85666]: 2025-11-24 09:27:17.717371191 +0000 UTC m=+0.116328077 container attach 9fedabf8ad188d5b47a2629b0e3dedff53c100ce36da4eabfae82325f0f7a642 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_thompson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:17 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 18 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:17 compute-0 nostalgic_thompson[85688]: 167 167
Nov 24 09:27:17 compute-0 systemd[1]: libpod-9fedabf8ad188d5b47a2629b0e3dedff53c100ce36da4eabfae82325f0f7a642.scope: Deactivated successfully.
Nov 24 09:27:17 compute-0 conmon[85688]: conmon 9fedabf8ad188d5b47a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9fedabf8ad188d5b47a2629b0e3dedff53c100ce36da4eabfae82325f0f7a642.scope/container/memory.events
Nov 24 09:27:17 compute-0 podman[85639]: 2025-11-24 09:27:17.721150618 +0000 UTC m=+0.181464860 container attach 11b753c791ec43a94446745ebc0ee2dbf8d89323fbaf0ea8675c554a229993e0 (image=quay.io/ceph/ceph:v19, name=modest_lumiere, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 24 09:27:17 compute-0 podman[85666]: 2025-11-24 09:27:17.72155395 +0000 UTC m=+0.120510826 container died 9fedabf8ad188d5b47a2629b0e3dedff53c100ce36da4eabfae82325f0f7a642 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_thompson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:17 compute-0 podman[85666]: 2025-11-24 09:27:17.631481491 +0000 UTC m=+0.030438387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-970f0a112eb0becf8d34ec310278aa969eb400fd139e7f6ff3864d47d23662d3-merged.mount: Deactivated successfully.
Nov 24 09:27:17 compute-0 podman[85666]: 2025-11-24 09:27:17.754289316 +0000 UTC m=+0.153246192 container remove 9fedabf8ad188d5b47a2629b0e3dedff53c100ce36da4eabfae82325f0f7a642 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 09:27:17 compute-0 systemd[1]: libpod-conmon-9fedabf8ad188d5b47a2629b0e3dedff53c100ce36da4eabfae82325f0f7a642.scope: Deactivated successfully.
Nov 24 09:27:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v67: 6 pgs: 2 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:17 compute-0 podman[85733]: 2025-11-24 09:27:17.915496643 +0000 UTC m=+0.045436208 container create 0ba642e83f2bdba85469071139c717561efffdebf3fa2c466b88e5513e0d2cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:17 compute-0 systemd[1]: Started libpod-conmon-0ba642e83f2bdba85469071139c717561efffdebf3fa2c466b88e5513e0d2cd0.scope.
Nov 24 09:27:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c4083903b1571595e4f7882a1a0c2e660e1ae76552eb1050d9a428a455449b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c4083903b1571595e4f7882a1a0c2e660e1ae76552eb1050d9a428a455449b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c4083903b1571595e4f7882a1a0c2e660e1ae76552eb1050d9a428a455449b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c4083903b1571595e4f7882a1a0c2e660e1ae76552eb1050d9a428a455449b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:17 compute-0 podman[85733]: 2025-11-24 09:27:17.898558621 +0000 UTC m=+0.028498206 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:18 compute-0 podman[85733]: 2025-11-24 09:27:18.008835862 +0000 UTC m=+0.138775427 container init 0ba642e83f2bdba85469071139c717561efffdebf3fa2c466b88e5513e0d2cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mccarthy, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:27:18 compute-0 podman[85733]: 2025-11-24 09:27:18.014759714 +0000 UTC m=+0.144699279 container start 0ba642e83f2bdba85469071139c717561efffdebf3fa2c466b88e5513e0d2cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:18 compute-0 podman[85733]: 2025-11-24 09:27:18.017651363 +0000 UTC m=+0.147590948 container attach 0ba642e83f2bdba85469071139c717561efffdebf3fa2c466b88e5513e0d2cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mccarthy, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 24 09:27:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 24 09:27:18 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2555317958' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:18 compute-0 ceph-mon[74331]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8adc21f3-187b-4333-b4ae-3cc82866c3f9"}]: dispatch
Nov 24 09:27:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1514770584' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8adc21f3-187b-4333-b4ae-3cc82866c3f9"}]: dispatch
Nov 24 09:27:18 compute-0 ceph-mon[74331]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8adc21f3-187b-4333-b4ae-3cc82866c3f9"}]': finished
Nov 24 09:27:18 compute-0 ceph-mon[74331]: osdmap e18: 3 total, 2 up, 3 in
Nov 24 09:27:18 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:18 compute-0 ceph-mon[74331]: pgmap v67: 6 pgs: 2 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2555317958' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]: {
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:     "0": [
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:         {
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             "devices": [
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "/dev/loop3"
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             ],
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             "lv_name": "ceph_lv0",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             "lv_size": "21470642176",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             "name": "ceph_lv0",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             "tags": {
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.cluster_name": "ceph",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.crush_device_class": "",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.encrypted": "0",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.osd_id": "0",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.type": "block",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.vdo": "0",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:                 "ceph.with_tpm": "0"
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             },
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             "type": "block",
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:             "vg_name": "ceph_vg0"
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:         }
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]:     ]
Nov 24 09:27:18 compute-0 admiring_mccarthy[85749]: }
Nov 24 09:27:18 compute-0 systemd[1]: libpod-0ba642e83f2bdba85469071139c717561efffdebf3fa2c466b88e5513e0d2cd0.scope: Deactivated successfully.
Nov 24 09:27:18 compute-0 podman[85733]: 2025-11-24 09:27:18.316655806 +0000 UTC m=+0.446595371 container died 0ba642e83f2bdba85469071139c717561efffdebf3fa2c466b88e5513e0d2cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mccarthy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6c4083903b1571595e4f7882a1a0c2e660e1ae76552eb1050d9a428a455449b-merged.mount: Deactivated successfully.
Nov 24 09:27:18 compute-0 podman[85733]: 2025-11-24 09:27:18.367370545 +0000 UTC m=+0.497310110 container remove 0ba642e83f2bdba85469071139c717561efffdebf3fa2c466b88e5513e0d2cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:27:18 compute-0 systemd[1]: libpod-conmon-0ba642e83f2bdba85469071139c717561efffdebf3fa2c466b88e5513e0d2cd0.scope: Deactivated successfully.
Nov 24 09:27:18 compute-0 sudo[85560]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:18 compute-0 sudo[85774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:18 compute-0 sudo[85774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:18 compute-0 sudo[85774]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:18 compute-0 sudo[85799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:27:18 compute-0 sudo[85799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 24 09:27:18 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2555317958' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Nov 24 09:27:18 compute-0 modest_lumiere[85683]: pool 'cephfs.cephfs.data' created
Nov 24 09:27:18 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Nov 24 09:27:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:18 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:18 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:18 compute-0 systemd[1]: libpod-11b753c791ec43a94446745ebc0ee2dbf8d89323fbaf0ea8675c554a229993e0.scope: Deactivated successfully.
Nov 24 09:27:18 compute-0 podman[85639]: 2025-11-24 09:27:18.737192514 +0000 UTC m=+1.197506746 container died 11b753c791ec43a94446745ebc0ee2dbf8d89323fbaf0ea8675c554a229993e0 (image=quay.io/ceph/ceph:v19, name=modest_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 09:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-37cbfc86b9aad56a70e0d9555bfc76b3e3b2519b75d56d73d1419309a422a88e-merged.mount: Deactivated successfully.
Nov 24 09:27:18 compute-0 podman[85639]: 2025-11-24 09:27:18.771891671 +0000 UTC m=+1.232205893 container remove 11b753c791ec43a94446745ebc0ee2dbf8d89323fbaf0ea8675c554a229993e0 (image=quay.io/ceph/ceph:v19, name=modest_lumiere, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 09:27:18 compute-0 systemd[1]: libpod-conmon-11b753c791ec43a94446745ebc0ee2dbf8d89323fbaf0ea8675c554a229993e0.scope: Deactivated successfully.
Nov 24 09:27:18 compute-0 sudo[85608]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:18 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.rzcnzg started
Nov 24 09:27:18 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mgr.compute-2.rzcnzg 192.168.122.102:0/3572216359; not ready for session (expect reconnect)
Nov 24 09:27:18 compute-0 podman[85872]: 2025-11-24 09:27:18.896524783 +0000 UTC m=+0.042766535 container create 8879308bbd118d79beb2324090e20d2f0064227e051571b9a4cccf3bee394154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_mirzakhani, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Nov 24 09:27:18 compute-0 systemd[1]: Started libpod-conmon-8879308bbd118d79beb2324090e20d2f0064227e051571b9a4cccf3bee394154.scope.
Nov 24 09:27:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:18 compute-0 podman[85872]: 2025-11-24 09:27:18.877359174 +0000 UTC m=+0.023600966 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:18 compute-0 podman[85872]: 2025-11-24 09:27:18.977243025 +0000 UTC m=+0.123484787 container init 8879308bbd118d79beb2324090e20d2f0064227e051571b9a4cccf3bee394154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:27:18 compute-0 sudo[85914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfgyrcxbmqyddfplwkqoqylkljoxzncv ; /usr/bin/python3'
Nov 24 09:27:18 compute-0 podman[85872]: 2025-11-24 09:27:18.986469398 +0000 UTC m=+0.132711170 container start 8879308bbd118d79beb2324090e20d2f0064227e051571b9a4cccf3bee394154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:18 compute-0 sudo[85914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:18 compute-0 sad_mirzakhani[85888]: 167 167
Nov 24 09:27:18 compute-0 systemd[1]: libpod-8879308bbd118d79beb2324090e20d2f0064227e051571b9a4cccf3bee394154.scope: Deactivated successfully.
Nov 24 09:27:18 compute-0 podman[85872]: 2025-11-24 09:27:18.991011409 +0000 UTC m=+0.137253171 container attach 8879308bbd118d79beb2324090e20d2f0064227e051571b9a4cccf3bee394154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:18 compute-0 podman[85872]: 2025-11-24 09:27:18.992030819 +0000 UTC m=+0.138272641 container died 8879308bbd118d79beb2324090e20d2f0064227e051571b9a4cccf3bee394154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9cf0d1b4835580462b543188a6210f5f35c1dff781c819fdd30e2f2413b5dbd-merged.mount: Deactivated successfully.
Nov 24 09:27:19 compute-0 podman[85872]: 2025-11-24 09:27:19.040038256 +0000 UTC m=+0.186280018 container remove 8879308bbd118d79beb2324090e20d2f0064227e051571b9a4cccf3bee394154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_mirzakhani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:19 compute-0 systemd[1]: libpod-conmon-8879308bbd118d79beb2324090e20d2f0064227e051571b9a4cccf3bee394154.scope: Deactivated successfully.
Nov 24 09:27:19 compute-0 python3[85918]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3983475032' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 09:27:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2555317958' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 09:27:19 compute-0 ceph-mon[74331]: osdmap e19: 3 total, 2 up, 3 in
Nov 24 09:27:19 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:19 compute-0 ceph-mon[74331]: Standby manager daemon compute-2.rzcnzg started
Nov 24 09:27:19 compute-0 podman[85936]: 2025-11-24 09:27:19.177501092 +0000 UTC m=+0.040850037 container create 4a89f81b6d4b32c7a538d906e14572f56ee9e2474cea9048c5c451f7f086d563 (image=quay.io/ceph/ceph:v19, name=suspicious_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:19 compute-0 podman[85946]: 2025-11-24 09:27:19.193000928 +0000 UTC m=+0.043386265 container create 0b1e82e86dc4c223233d31921b521960691023b49182190229cb3abcd58ef4ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_turing, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 09:27:19 compute-0 systemd[1]: Started libpod-conmon-4a89f81b6d4b32c7a538d906e14572f56ee9e2474cea9048c5c451f7f086d563.scope.
Nov 24 09:27:19 compute-0 systemd[1]: Started libpod-conmon-0b1e82e86dc4c223233d31921b521960691023b49182190229cb3abcd58ef4ad.scope.
Nov 24 09:27:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b4df12fff89bebd47df90e1fe6cc4bd4a27ea168e47e433426fde324b355c0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b4df12fff89bebd47df90e1fe6cc4bd4a27ea168e47e433426fde324b355c0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31174c0c409aabfc70832721b5b347d0cdbc5e49e47e0b2af1cd373528b777d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31174c0c409aabfc70832721b5b347d0cdbc5e49e47e0b2af1cd373528b777d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31174c0c409aabfc70832721b5b347d0cdbc5e49e47e0b2af1cd373528b777d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31174c0c409aabfc70832721b5b347d0cdbc5e49e47e0b2af1cd373528b777d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:19 compute-0 podman[85936]: 2025-11-24 09:27:19.250895768 +0000 UTC m=+0.114244733 container init 4a89f81b6d4b32c7a538d906e14572f56ee9e2474cea9048c5c451f7f086d563 (image=quay.io/ceph/ceph:v19, name=suspicious_napier, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:27:19 compute-0 podman[85936]: 2025-11-24 09:27:19.157340561 +0000 UTC m=+0.020689506 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:19 compute-0 podman[85946]: 2025-11-24 09:27:19.255708856 +0000 UTC m=+0.106094213 container init 0b1e82e86dc4c223233d31921b521960691023b49182190229cb3abcd58ef4ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:19 compute-0 podman[85936]: 2025-11-24 09:27:19.258192542 +0000 UTC m=+0.121541477 container start 4a89f81b6d4b32c7a538d906e14572f56ee9e2474cea9048c5c451f7f086d563 (image=quay.io/ceph/ceph:v19, name=suspicious_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:19 compute-0 podman[85936]: 2025-11-24 09:27:19.261111503 +0000 UTC m=+0.124460438 container attach 4a89f81b6d4b32c7a538d906e14572f56ee9e2474cea9048c5c451f7f086d563 (image=quay.io/ceph/ceph:v19, name=suspicious_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 09:27:19 compute-0 podman[85946]: 2025-11-24 09:27:19.261744332 +0000 UTC m=+0.112129669 container start 0b1e82e86dc4c223233d31921b521960691023b49182190229cb3abcd58ef4ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:19 compute-0 podman[85946]: 2025-11-24 09:27:19.264724023 +0000 UTC m=+0.115109360 container attach 0b1e82e86dc4c223233d31921b521960691023b49182190229cb3abcd58ef4ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_turing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:19 compute-0 podman[85946]: 2025-11-24 09:27:19.172444986 +0000 UTC m=+0.022830353 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
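[annotation] The podman lines above trace the full lifecycle of two short-lived "--rm" containers (suspicious_napier and gifted_turing): image pull, container init, start, and attach, with the matching died and remove events appearing further down. A minimal way to watch the same event stream live on the host, assuming podman is available locally:

    # Follow recent container lifecycle events (create/init/start/attach/died/remove)
    podman events --since 2m --filter type=container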
Nov 24 09:27:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Nov 24 09:27:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2927635265' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 24 09:27:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 24 09:27:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2927635265' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 24 09:27:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Nov 24 09:27:19 compute-0 suspicious_napier[85969]: enabled application 'rbd' on pool 'vms'
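[annotation] "enabled application 'rbd' on pool 'vms'" is the container's stdout for the mon command dispatched just above: it tags the pool with an application so the POOL_APP_NOT_ENABLED health check stops flagging it. A hedged verification, using the standard ceph CLI and the pool name from the log:

    # Should print the application tag set above, e.g. {"rbd": {}}
    ceph osd pool application get vms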
Nov 24 09:27:19 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Nov 24 09:27:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:19 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:19 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
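[annotation] The recurring "failed to return metadata for osd.2: (2) No such file or directory" is consistent with the osdmap at this point (3 total, 2 up, 3 in): osd.2 exists in the map but its daemon has not started yet, so the mon holds no metadata for it. The message should clear once cephadm deploys osd.2 (09:27:23 below). A hedged way to confirm from an admin shell:

    ceph osd tree        # osd.2 shows down until its daemon registers
    ceph osd metadata 2  # returns ENOENT until osd.2 has reported in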
Nov 24 09:27:19 compute-0 systemd[1]: libpod-4a89f81b6d4b32c7a538d906e14572f56ee9e2474cea9048c5c451f7f086d563.scope: Deactivated successfully.
Nov 24 09:27:19 compute-0 podman[85936]: 2025-11-24 09:27:19.744656528 +0000 UTC m=+0.608005463 container died 4a89f81b6d4b32c7a538d906e14572f56ee9e2474cea9048c5c451f7f086d563 (image=quay.io/ceph/ceph:v19, name=suspicious_napier, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:19 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.mauvni(active, since 114s), standbys: compute-2.rzcnzg
Nov 24 09:27:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"} v 0)
Nov 24 09:27:19 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 09:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5b4df12fff89bebd47df90e1fe6cc4bd4a27ea168e47e433426fde324b355c0-merged.mount: Deactivated successfully.
Nov 24 09:27:19 compute-0 podman[85936]: 2025-11-24 09:27:19.78148442 +0000 UTC m=+0.644833355 container remove 4a89f81b6d4b32c7a538d906e14572f56ee9e2474cea9048c5c451f7f086d563 (image=quay.io/ceph/ceph:v19, name=suspicious_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:27:19 compute-0 systemd[1]: libpod-conmon-4a89f81b6d4b32c7a538d906e14572f56ee9e2474cea9048c5c451f7f086d563.scope: Deactivated successfully.
Nov 24 09:27:19 compute-0 sudo[85914]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
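[annotation] The pgmap line shows the placement groups of the freshly created pools still settling: 3 unknown and 1 creating+peering alongside 3 active+clean. A quick status check, assuming the admin keyring is available:

    ceph pg stat         # one-line summary of PG states
    ceph health detail   # names anything still not active+clean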
Nov 24 09:27:19 compute-0 lvm[86082]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:27:19 compute-0 lvm[86082]: VG ceph_vg0 finished
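[annotation] PV /dev/loop3 backing VG ceph_vg0 indicates the OSD storage here sits on a loop device rather than a physical disk, which is common in CI runs like this zuul job. A hypothetical sketch of how such a VG could be assembled; the file path and size are assumptions, only /dev/loop3 and the VG name ceph_vg0 appear in the log:

    # Assumed file-backed loop device feeding LVM (run as root)
    truncate -s 20G /var/lib/ceph-loop.img      # backing file and size assumed
    losetup /dev/loop3 /var/lib/ceph-loop.img   # loop number from the log
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3                # VG name from the log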
Nov 24 09:27:19 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.qelqsg started
Nov 24 09:27:19 compute-0 gifted_turing[85974]: {}
Nov 24 09:27:19 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from mgr.compute-1.qelqsg 192.168.122.101:0/1144256300; not ready for session (expect reconnect)
Nov 24 09:27:19 compute-0 sudo[86109]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwkcztgusfwjfzefybwtxpxfxewwipub ; /usr/bin/python3'
Nov 24 09:27:19 compute-0 sudo[86109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:19 compute-0 systemd[1]: libpod-0b1e82e86dc4c223233d31921b521960691023b49182190229cb3abcd58ef4ad.scope: Deactivated successfully.
Nov 24 09:27:19 compute-0 podman[85946]: 2025-11-24 09:27:19.975689721 +0000 UTC m=+0.826075058 container died 0b1e82e86dc4c223233d31921b521960691023b49182190229cb3abcd58ef4ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 24 09:27:19 compute-0 systemd[1]: libpod-0b1e82e86dc4c223233d31921b521960691023b49182190229cb3abcd58ef4ad.scope: Consumed 1.132s CPU time.
Nov 24 09:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f31174c0c409aabfc70832721b5b347d0cdbc5e49e47e0b2af1cd373528b777d-merged.mount: Deactivated successfully.
Nov 24 09:27:20 compute-0 podman[85946]: 2025-11-24 09:27:20.017807276 +0000 UTC m=+0.868192613 container remove 0b1e82e86dc4c223233d31921b521960691023b49182190229cb3abcd58ef4ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_turing, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:20 compute-0 systemd[1]: libpod-conmon-0b1e82e86dc4c223233d31921b521960691023b49182190229cb3abcd58ef4ad.scope: Deactivated successfully.
Nov 24 09:27:20 compute-0 sudo[85799]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:27:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:27:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:20 compute-0 python3[86111]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
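[annotation] Flattened into one journald line above, the Ansible task runs the following podman command (reformatted for readability; every value is taken verbatim from the log). The same pattern repeats at 09:27:21, 09:27:22, 09:27:23 and 09:27:24 below for the backups, images, cephfs.cephfs.meta and cephfs.cephfs.data pools:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 \
      -c /etc/ceph/ceph.conf \
      -k /etc/ceph/ceph.client.admin.keyring \
      osd pool application enable volumes rbd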
Nov 24 09:27:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2927635265' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 24 09:27:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2927635265' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 24 09:27:20 compute-0 ceph-mon[74331]: osdmap e20: 3 total, 2 up, 3 in
Nov 24 09:27:20 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:20 compute-0 ceph-mon[74331]: mgrmap e10: compute-0.mauvni(active, since 114s), standbys: compute-2.rzcnzg
Nov 24 09:27:20 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 09:27:20 compute-0 ceph-mon[74331]: pgmap v70: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:20 compute-0 ceph-mon[74331]: Standby manager daemon compute-1.qelqsg started
Nov 24 09:27:20 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:20 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:20 compute-0 podman[86124]: 2025-11-24 09:27:20.151355551 +0000 UTC m=+0.041062573 container create a31a8348d6a091ddb5e278d39cfc5c3dfb118de1b52a9bc0261fd9eb2e3227a7 (image=quay.io/ceph/ceph:v19, name=magical_bose, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:27:20 compute-0 systemd[1]: Started libpod-conmon-a31a8348d6a091ddb5e278d39cfc5c3dfb118de1b52a9bc0261fd9eb2e3227a7.scope.
Nov 24 09:27:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8cfd458b5f5b796e788e460d4973b287210d5400a1dca06e3f41ec9bec4b2ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8cfd458b5f5b796e788e460d4973b287210d5400a1dca06e3f41ec9bec4b2ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
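[annotation] The xfs "supports timestamps until 2038" messages are informational: the overlay backing filesystem was formatted without the bigtime feature, so inode timestamps are 32-bit and the kernel notes the y2038 horizon at every remount. A hedged check (mount point assumed; bigtime reporting requires a reasonably recent xfsprogs):

    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'   # 0 means 2038-limited timestamps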
Nov 24 09:27:20 compute-0 podman[86124]: 2025-11-24 09:27:20.13276734 +0000 UTC m=+0.022474412 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:20 compute-0 podman[86124]: 2025-11-24 09:27:20.245376093 +0000 UTC m=+0.135083165 container init a31a8348d6a091ddb5e278d39cfc5c3dfb118de1b52a9bc0261fd9eb2e3227a7 (image=quay.io/ceph/ceph:v19, name=magical_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:20 compute-0 podman[86124]: 2025-11-24 09:27:20.254690069 +0000 UTC m=+0.144397101 container start a31a8348d6a091ddb5e278d39cfc5c3dfb118de1b52a9bc0261fd9eb2e3227a7 (image=quay.io/ceph/ceph:v19, name=magical_bose, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:20 compute-0 podman[86124]: 2025-11-24 09:27:20.25829372 +0000 UTC m=+0.148000792 container attach a31a8348d6a091ddb5e278d39cfc5c3dfb118de1b52a9bc0261fd9eb2e3227a7 (image=quay.io/ceph/ceph:v19, name=magical_bose, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Nov 24 09:27:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2444820917' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 24 09:27:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 24 09:27:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2444820917' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 24 09:27:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Nov 24 09:27:20 compute-0 magical_bose[86139]: enabled application 'rbd' on pool 'volumes'
Nov 24 09:27:20 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Nov 24 09:27:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:20 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:20 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:20 compute-0 systemd[1]: libpod-a31a8348d6a091ddb5e278d39cfc5c3dfb118de1b52a9bc0261fd9eb2e3227a7.scope: Deactivated successfully.
Nov 24 09:27:20 compute-0 podman[86124]: 2025-11-24 09:27:20.758601741 +0000 UTC m=+0.648308853 container died a31a8348d6a091ddb5e278d39cfc5c3dfb118de1b52a9bc0261fd9eb2e3227a7 (image=quay.io/ceph/ceph:v19, name=magical_bose, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:20 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.mauvni(active, since 115s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"} v 0)
Nov 24 09:27:20 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 09:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8cfd458b5f5b796e788e460d4973b287210d5400a1dca06e3f41ec9bec4b2ce-merged.mount: Deactivated successfully.
Nov 24 09:27:20 compute-0 podman[86124]: 2025-11-24 09:27:20.79564575 +0000 UTC m=+0.685352772 container remove a31a8348d6a091ddb5e278d39cfc5c3dfb118de1b52a9bc0261fd9eb2e3227a7 (image=quay.io/ceph/ceph:v19, name=magical_bose, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 09:27:20 compute-0 systemd[1]: libpod-conmon-a31a8348d6a091ddb5e278d39cfc5c3dfb118de1b52a9bc0261fd9eb2e3227a7.scope: Deactivated successfully.
Nov 24 09:27:20 compute-0 sudo[86109]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:20 compute-0 sudo[86198]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saurnowdwgndbmvotjwejqraxhscdsih ; /usr/bin/python3'
Nov 24 09:27:20 compute-0 sudo[86198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:21 compute-0 python3[86200]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:21 compute-0 podman[86201]: 2025-11-24 09:27:21.214865499 +0000 UTC m=+0.074422979 container create 2fc6956b427451aa7a9d162fc58285d89ea59e6b7a22c106bada63998fa1fb1a (image=quay.io/ceph/ceph:v19, name=vibrant_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:27:21 compute-0 podman[86201]: 2025-11-24 09:27:21.161614661 +0000 UTC m=+0.021172171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2444820917' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 24 09:27:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2444820917' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 24 09:27:21 compute-0 ceph-mon[74331]: osdmap e21: 3 total, 2 up, 3 in
Nov 24 09:27:21 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:21 compute-0 ceph-mon[74331]: mgrmap e11: compute-0.mauvni(active, since 115s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:21 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 09:27:21 compute-0 systemd[1]: Started libpod-conmon-2fc6956b427451aa7a9d162fc58285d89ea59e6b7a22c106bada63998fa1fb1a.scope.
Nov 24 09:27:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/473ea2fd8553d783f65c1603ee2cc1ed6527364e16736588e5ad720a77bf0326/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/473ea2fd8553d783f65c1603ee2cc1ed6527364e16736588e5ad720a77bf0326/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:21 compute-0 podman[86201]: 2025-11-24 09:27:21.377029334 +0000 UTC m=+0.236586844 container init 2fc6956b427451aa7a9d162fc58285d89ea59e6b7a22c106bada63998fa1fb1a (image=quay.io/ceph/ceph:v19, name=vibrant_moser, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:21 compute-0 podman[86201]: 2025-11-24 09:27:21.392009154 +0000 UTC m=+0.251566634 container start 2fc6956b427451aa7a9d162fc58285d89ea59e6b7a22c106bada63998fa1fb1a (image=quay.io/ceph/ceph:v19, name=vibrant_moser, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 09:27:21 compute-0 podman[86201]: 2025-11-24 09:27:21.443278391 +0000 UTC m=+0.302835891 container attach 2fc6956b427451aa7a9d162fc58285d89ea59e6b7a22c106bada63998fa1fb1a (image=quay.io/ceph/ceph:v19, name=vibrant_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Nov 24 09:27:21 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3988938670' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 24 09:27:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:22 compute-0 ceph-mon[74331]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
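[annotation] The POOL_APP_NOT_ENABLED warning counts down as each "osd pool application enable" in this run lands: 4 pools remain untagged here, fewer after each container below. To see exactly which pools remain, a standard check would be:

    ceph health detail   # lists the pools without an application tag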
Nov 24 09:27:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 24 09:27:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3988938670' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 24 09:27:22 compute-0 ceph-mon[74331]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3988938670' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 24 09:27:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Nov 24 09:27:22 compute-0 vibrant_moser[86217]: enabled application 'rbd' on pool 'backups'
Nov 24 09:27:22 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Nov 24 09:27:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:22 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:22 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:22 compute-0 systemd[1]: libpod-2fc6956b427451aa7a9d162fc58285d89ea59e6b7a22c106bada63998fa1fb1a.scope: Deactivated successfully.
Nov 24 09:27:22 compute-0 podman[86201]: 2025-11-24 09:27:22.330452316 +0000 UTC m=+1.190009796 container died 2fc6956b427451aa7a9d162fc58285d89ea59e6b7a22c106bada63998fa1fb1a (image=quay.io/ceph/ceph:v19, name=vibrant_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 09:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-473ea2fd8553d783f65c1603ee2cc1ed6527364e16736588e5ad720a77bf0326-merged.mount: Deactivated successfully.
Nov 24 09:27:22 compute-0 podman[86201]: 2025-11-24 09:27:22.363721649 +0000 UTC m=+1.223279129 container remove 2fc6956b427451aa7a9d162fc58285d89ea59e6b7a22c106bada63998fa1fb1a (image=quay.io/ceph/ceph:v19, name=vibrant_moser, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 24 09:27:22 compute-0 systemd[1]: libpod-conmon-2fc6956b427451aa7a9d162fc58285d89ea59e6b7a22c106bada63998fa1fb1a.scope: Deactivated successfully.
Nov 24 09:27:22 compute-0 sudo[86198]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:22 compute-0 sudo[86277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axwjbeximjajlafwyaydsxmjwwmswaui ; /usr/bin/python3'
Nov 24 09:27:22 compute-0 sudo[86277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:22 compute-0 python3[86279]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:22 compute-0 podman[86280]: 2025-11-24 09:27:22.730400822 +0000 UTC m=+0.050199614 container create 1cb9c587c738276bb26da12717f699249ab0c3b0c2f1fe6f9e74d45064ca4b56 (image=quay.io/ceph/ceph:v19, name=musing_noether, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 09:27:22 compute-0 systemd[1]: Started libpod-conmon-1cb9c587c738276bb26da12717f699249ab0c3b0c2f1fe6f9e74d45064ca4b56.scope.
Nov 24 09:27:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb135c2b88b5cab3ebdecdcee1e76d3c6f05f7cb71c96470b004d39a493f8d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb135c2b88b5cab3ebdecdcee1e76d3c6f05f7cb71c96470b004d39a493f8d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:22 compute-0 podman[86280]: 2025-11-24 09:27:22.797143424 +0000 UTC m=+0.116942266 container init 1cb9c587c738276bb26da12717f699249ab0c3b0c2f1fe6f9e74d45064ca4b56 (image=quay.io/ceph/ceph:v19, name=musing_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:22 compute-0 podman[86280]: 2025-11-24 09:27:22.80320574 +0000 UTC m=+0.123004542 container start 1cb9c587c738276bb26da12717f699249ab0c3b0c2f1fe6f9e74d45064ca4b56 (image=quay.io/ceph/ceph:v19, name=musing_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:22 compute-0 podman[86280]: 2025-11-24 09:27:22.708254902 +0000 UTC m=+0.028053714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:22 compute-0 podman[86280]: 2025-11-24 09:27:22.806470571 +0000 UTC m=+0.126269413 container attach 1cb9c587c738276bb26da12717f699249ab0c3b0c2f1fe6f9e74d45064ca4b56 (image=quay.io/ceph/ceph:v19, name=musing_noether, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Nov 24 09:27:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3230617921' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 24 09:27:23 compute-0 ceph-mon[74331]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 09:27:23 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3988938670' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 24 09:27:23 compute-0 ceph-mon[74331]: osdmap e22: 3 total, 2 up, 3 in
Nov 24 09:27:23 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:23 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3230617921' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 24 09:27:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 24 09:27:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3230617921' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 24 09:27:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Nov 24 09:27:23 compute-0 musing_noether[86295]: enabled application 'rbd' on pool 'images'
Nov 24 09:27:23 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Nov 24 09:27:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:23 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:23 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:23 compute-0 systemd[1]: libpod-1cb9c587c738276bb26da12717f699249ab0c3b0c2f1fe6f9e74d45064ca4b56.scope: Deactivated successfully.
Nov 24 09:27:23 compute-0 podman[86280]: 2025-11-24 09:27:23.34383334 +0000 UTC m=+0.663632132 container died 1cb9c587c738276bb26da12717f699249ab0c3b0c2f1fe6f9e74d45064ca4b56 (image=quay.io/ceph/ceph:v19, name=musing_noether, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbb135c2b88b5cab3ebdecdcee1e76d3c6f05f7cb71c96470b004d39a493f8d7-merged.mount: Deactivated successfully.
Nov 24 09:27:23 compute-0 podman[86280]: 2025-11-24 09:27:23.379174198 +0000 UTC m=+0.698972990 container remove 1cb9c587c738276bb26da12717f699249ab0c3b0c2f1fe6f9e74d45064ca4b56 (image=quay.io/ceph/ceph:v19, name=musing_noether, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 09:27:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Nov 24 09:27:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 24 09:27:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:23 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:23 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Nov 24 09:27:23 compute-0 systemd[1]: libpod-conmon-1cb9c587c738276bb26da12717f699249ab0c3b0c2f1fe6f9e74d45064ca4b56.scope: Deactivated successfully.
Nov 24 09:27:23 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
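[annotation] With osd.2's auth key fetched ("auth get") and a minimal conf generated, cephadm begins deploying the daemon on compute-2, which should resolve the repeated osd.2 metadata errors above. A hedged way to follow the rollout from the admin node:

    ceph orch ps --daemon-type osd   # osd.2 should appear on compute-2 once deployed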
Nov 24 09:27:23 compute-0 sudo[86277]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:23 compute-0 sudo[86355]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzznvwwbxnrgzsiephhncibubshmcjkf ; /usr/bin/python3'
Nov 24 09:27:23 compute-0 sudo[86355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:23 compute-0 python3[86357]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:23 compute-0 podman[86358]: 2025-11-24 09:27:23.704999815 +0000 UTC m=+0.037601856 container create dd4e012ff204cd51e4da6bf043868ed38ec574c2b5fd7f086e2cc913fb8c27ad (image=quay.io/ceph/ceph:v19, name=dreamy_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:23 compute-0 systemd[1]: Started libpod-conmon-dd4e012ff204cd51e4da6bf043868ed38ec574c2b5fd7f086e2cc913fb8c27ad.scope.
Nov 24 09:27:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b25ec17c46a4d4d524f268ac828a832b0e98e0aa8996e148e4107004a14fe27/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b25ec17c46a4d4d524f268ac828a832b0e98e0aa8996e148e4107004a14fe27/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:23 compute-0 podman[86358]: 2025-11-24 09:27:23.775615366 +0000 UTC m=+0.108217407 container init dd4e012ff204cd51e4da6bf043868ed38ec574c2b5fd7f086e2cc913fb8c27ad (image=quay.io/ceph/ceph:v19, name=dreamy_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:23 compute-0 podman[86358]: 2025-11-24 09:27:23.782196199 +0000 UTC m=+0.114798240 container start dd4e012ff204cd51e4da6bf043868ed38ec574c2b5fd7f086e2cc913fb8c27ad (image=quay.io/ceph/ceph:v19, name=dreamy_matsumoto, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:23 compute-0 podman[86358]: 2025-11-24 09:27:23.687706294 +0000 UTC m=+0.020308365 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:23 compute-0 podman[86358]: 2025-11-24 09:27:23.784878641 +0000 UTC m=+0.117480702 container attach dd4e012ff204cd51e4da6bf043868ed38ec574c2b5fd7f086e2cc913fb8c27ad (image=quay.io/ceph/ceph:v19, name=dreamy_matsumoto, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Nov 24 09:27:24 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2984871477' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 24 09:27:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 24 09:27:24 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3230617921' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 24 09:27:24 compute-0 ceph-mon[74331]: osdmap e23: 3 total, 2 up, 3 in
Nov 24 09:27:24 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:24 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 24 09:27:24 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:24 compute-0 ceph-mon[74331]: Deploying daemon osd.2 on compute-2
Nov 24 09:27:24 compute-0 ceph-mon[74331]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:24 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2984871477' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 24 09:27:24 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2984871477' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 24 09:27:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Nov 24 09:27:24 compute-0 dreamy_matsumoto[86374]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 24 09:27:24 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Nov 24 09:27:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:24 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:24 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:24 compute-0 systemd[1]: libpod-dd4e012ff204cd51e4da6bf043868ed38ec574c2b5fd7f086e2cc913fb8c27ad.scope: Deactivated successfully.
Nov 24 09:27:24 compute-0 podman[86358]: 2025-11-24 09:27:24.36805007 +0000 UTC m=+0.700652121 container died dd4e012ff204cd51e4da6bf043868ed38ec574c2b5fd7f086e2cc913fb8c27ad (image=quay.io/ceph/ceph:v19, name=dreamy_matsumoto, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b25ec17c46a4d4d524f268ac828a832b0e98e0aa8996e148e4107004a14fe27-merged.mount: Deactivated successfully.
Nov 24 09:27:24 compute-0 podman[86358]: 2025-11-24 09:27:24.402874001 +0000 UTC m=+0.735476052 container remove dd4e012ff204cd51e4da6bf043868ed38ec574c2b5fd7f086e2cc913fb8c27ad (image=quay.io/ceph/ceph:v19, name=dreamy_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:27:24 compute-0 systemd[1]: libpod-conmon-dd4e012ff204cd51e4da6bf043868ed38ec574c2b5fd7f086e2cc913fb8c27ad.scope: Deactivated successfully.
Nov 24 09:27:24 compute-0 sudo[86355]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:24 compute-0 sudo[86434]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlzwbkgporhoiwicikypswxasnttkhec ; /usr/bin/python3'
Nov 24 09:27:24 compute-0 sudo[86434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:24 compute-0 python3[86436]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:24 compute-0 podman[86437]: 2025-11-24 09:27:24.800543017 +0000 UTC m=+0.043247601 container create a05c2b3644a41425eb0a7d942c26b50db6ffa3d730ac8052c5b0aaa38ed9fb49 (image=quay.io/ceph/ceph:v19, name=ecstatic_wing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:24 compute-0 systemd[1]: Started libpod-conmon-a05c2b3644a41425eb0a7d942c26b50db6ffa3d730ac8052c5b0aaa38ed9fb49.scope.
Nov 24 09:27:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e0d47668043bd938c6ac05231975a93047d2e56c4668485ed4fe4b066dc588/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e0d47668043bd938c6ac05231975a93047d2e56c4668485ed4fe4b066dc588/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:24 compute-0 podman[86437]: 2025-11-24 09:27:24.86635874 +0000 UTC m=+0.109063334 container init a05c2b3644a41425eb0a7d942c26b50db6ffa3d730ac8052c5b0aaa38ed9fb49 (image=quay.io/ceph/ceph:v19, name=ecstatic_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 09:27:24 compute-0 podman[86437]: 2025-11-24 09:27:24.871328573 +0000 UTC m=+0.114033147 container start a05c2b3644a41425eb0a7d942c26b50db6ffa3d730ac8052c5b0aaa38ed9fb49 (image=quay.io/ceph/ceph:v19, name=ecstatic_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:24 compute-0 podman[86437]: 2025-11-24 09:27:24.874570313 +0000 UTC m=+0.117274877 container attach a05c2b3644a41425eb0a7d942c26b50db6ffa3d730ac8052c5b0aaa38ed9fb49 (image=quay.io/ceph/ceph:v19, name=ecstatic_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 24 09:27:24 compute-0 podman[86437]: 2025-11-24 09:27:24.778945012 +0000 UTC m=+0.021649596 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Nov 24 09:27:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2208436282' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 24 09:27:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 24 09:27:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2984871477' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 24 09:27:25 compute-0 ceph-mon[74331]: osdmap e24: 3 total, 2 up, 3 in
Nov 24 09:27:25 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2208436282' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 24 09:27:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2208436282' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 24 09:27:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Nov 24 09:27:25 compute-0 ecstatic_wing[86452]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 24 09:27:25 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Nov 24 09:27:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:25 compute-0 systemd[1]: libpod-a05c2b3644a41425eb0a7d942c26b50db6ffa3d730ac8052c5b0aaa38ed9fb49.scope: Deactivated successfully.
Nov 24 09:27:25 compute-0 podman[86437]: 2025-11-24 09:27:25.397219081 +0000 UTC m=+0.639923735 container died a05c2b3644a41425eb0a7d942c26b50db6ffa3d730ac8052c5b0aaa38ed9fb49 (image=quay.io/ceph/ceph:v19, name=ecstatic_wing, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 24 09:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-92e0d47668043bd938c6ac05231975a93047d2e56c4668485ed4fe4b066dc588-merged.mount: Deactivated successfully.
Nov 24 09:27:25 compute-0 podman[86437]: 2025-11-24 09:27:25.4365312 +0000 UTC m=+0.679235764 container remove a05c2b3644a41425eb0a7d942c26b50db6ffa3d730ac8052c5b0aaa38ed9fb49 (image=quay.io/ceph/ceph:v19, name=ecstatic_wing, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:27:25 compute-0 sudo[86434]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:25 compute-0 systemd[1]: libpod-conmon-a05c2b3644a41425eb0a7d942c26b50db6ffa3d730ac8052c5b0aaa38ed9fb49.scope: Deactivated successfully.
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:27:25
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'images']
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 09:27:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Nov 24 09:27:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:27:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:26 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 24 09:27:26 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 09:27:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 24 09:27:26 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Nov 24 09:27:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2208436282' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 24 09:27:26 compute-0 ceph-mon[74331]: osdmap e25: 3 total, 2 up, 3 in
Nov 24 09:27:26 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:26 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:26 compute-0 ceph-mon[74331]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:26 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Nov 24 09:27:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:26 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:26 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:26 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev cd9ad232-0e78-4353-bfe3-cbf4312033fc (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 24 09:27:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Nov 24 09:27:26 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:26 compute-0 python3[86562]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:27:26 compute-0 python3[86633]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763976446.2571008-37156-99351685865860/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:27:27 compute-0 sudo[86733]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhtsmcyorenhbcbvmxwugqfbptrbgwdt ; /usr/bin/python3'
Nov 24 09:27:27 compute-0 sudo[86733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 24 09:27:27 compute-0 ceph-mon[74331]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Nov 24 09:27:27 compute-0 ceph-mon[74331]: Cluster is now healthy
Nov 24 09:27:27 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:27 compute-0 ceph-mon[74331]: osdmap e26: 3 total, 2 up, 3 in
Nov 24 09:27:27 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:27 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:27 compute-0 python3[86735]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:27:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Nov 24 09:27:27 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Nov 24 09:27:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:27 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:27 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:27 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 7f776739-386b-423d-afac-c6293b5e943a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 24 09:27:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Nov 24 09:27:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:27 compute-0 sudo[86733]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:27 compute-0 sudo[86808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyvcpdgfbaznyafuemupvprpjqikhznk ; /usr/bin/python3'
Nov 24 09:27:27 compute-0 sudo[86808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:27 compute-0 python3[86810]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763976447.1856039-37170-178430755617517/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=4ebde8f580094d28406ed7ac11dbee1070630e9c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:27:27 compute-0 sudo[86808]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Nov 24 09:27:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Nov 24 09:27:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:27:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:27:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:28 compute-0 sudo[86858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfgjoynzbnzbomzbdxbpaavssyutpugc ; /usr/bin/python3'
Nov 24 09:27:28 compute-0 sudo[86858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:28 compute-0 python3[86860]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:28 compute-0 podman[86861]: 2025-11-24 09:27:28.275734169 +0000 UTC m=+0.037251227 container create 43874a1d2d03a1dab2ee90dde8d139260ed8e0b9ac8e6aaba0398f30b2e8d8d7 (image=quay.io/ceph/ceph:v19, name=sweet_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:28 compute-0 systemd[1]: Started libpod-conmon-43874a1d2d03a1dab2ee90dde8d139260ed8e0b9ac8e6aaba0398f30b2e8d8d7.scope.
Nov 24 09:27:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e926417b56c370c6d284d553f1a655aafda1203475dec44c1be2215d2ad8aee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e926417b56c370c6d284d553f1a655aafda1203475dec44c1be2215d2ad8aee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e926417b56c370c6d284d553f1a655aafda1203475dec44c1be2215d2ad8aee/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:28 compute-0 podman[86861]: 2025-11-24 09:27:28.327304434 +0000 UTC m=+0.088821512 container init 43874a1d2d03a1dab2ee90dde8d139260ed8e0b9ac8e6aaba0398f30b2e8d8d7 (image=quay.io/ceph/ceph:v19, name=sweet_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:28 compute-0 podman[86861]: 2025-11-24 09:27:28.332273267 +0000 UTC m=+0.093790325 container start 43874a1d2d03a1dab2ee90dde8d139260ed8e0b9ac8e6aaba0398f30b2e8d8d7 (image=quay.io/ceph/ceph:v19, name=sweet_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:28 compute-0 podman[86861]: 2025-11-24 09:27:28.33497229 +0000 UTC m=+0.096489368 container attach 43874a1d2d03a1dab2ee90dde8d139260ed8e0b9ac8e6aaba0398f30b2e8d8d7 (image=quay.io/ceph/ceph:v19, name=sweet_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 09:27:28 compute-0 podman[86861]: 2025-11-24 09:27:28.261099738 +0000 UTC m=+0.022616826 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 24 09:27:28 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:28 compute-0 ceph-mon[74331]: osdmap e27: 3 total, 2 up, 3 in
Nov 24 09:27:28 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:28 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:28 compute-0 ceph-mon[74331]: pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:28 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:28 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:28 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:28 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Nov 24 09:27:28 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Nov 24 09:27:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:28 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:28 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev e245a622-8a08-4003-a5a4-0402c5deb169 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 24 09:27:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Nov 24 09:27:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:28 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Nov 24 09:27:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2584835535' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 09:27:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2584835535' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 09:27:28 compute-0 sweet_mcclintock[86876]: 
Nov 24 09:27:28 compute-0 sweet_mcclintock[86876]: [global]
Nov 24 09:27:28 compute-0 sweet_mcclintock[86876]:         fsid = 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:28 compute-0 sweet_mcclintock[86876]:         mon_host = 192.168.122.100
Nov 24 09:27:28 compute-0 systemd[1]: libpod-43874a1d2d03a1dab2ee90dde8d139260ed8e0b9ac8e6aaba0398f30b2e8d8d7.scope: Deactivated successfully.
Nov 24 09:27:28 compute-0 podman[86861]: 2025-11-24 09:27:28.693202083 +0000 UTC m=+0.454719201 container died 43874a1d2d03a1dab2ee90dde8d139260ed8e0b9ac8e6aaba0398f30b2e8d8d7 (image=quay.io/ceph/ceph:v19, name=sweet_mcclintock, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 09:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e926417b56c370c6d284d553f1a655aafda1203475dec44c1be2215d2ad8aee-merged.mount: Deactivated successfully.
Nov 24 09:27:28 compute-0 podman[86861]: 2025-11-24 09:27:28.723907517 +0000 UTC m=+0.485424575 container remove 43874a1d2d03a1dab2ee90dde8d139260ed8e0b9ac8e6aaba0398f30b2e8d8d7 (image=quay.io/ceph/ceph:v19, name=sweet_mcclintock, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:27:28 compute-0 systemd[1]: libpod-conmon-43874a1d2d03a1dab2ee90dde8d139260ed8e0b9ac8e6aaba0398f30b2e8d8d7.scope: Deactivated successfully.
Nov 24 09:27:28 compute-0 sudo[86858]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:28 compute-0 sudo[86935]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dznbgrqqtailimormrhhcxdqjxrmfomj ; /usr/bin/python3'
Nov 24 09:27:28 compute-0 sudo[86935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:29 compute-0 python3[86937]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:29 compute-0 podman[86938]: 2025-11-24 09:27:29.089699873 +0000 UTC m=+0.057672094 container create 9d905623972d0aa987200310c4ec57a07da946a5da1c0fe8765808bbc0db5587 (image=quay.io/ceph/ceph:v19, name=flamboyant_ishizaka, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:29 compute-0 systemd[1]: Started libpod-conmon-9d905623972d0aa987200310c4ec57a07da946a5da1c0fe8765808bbc0db5587.scope.
Nov 24 09:27:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:29 compute-0 podman[86938]: 2025-11-24 09:27:29.069949536 +0000 UTC m=+0.037921747 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dd8eae52d2cac7d2c5c233fa4e0e6c19187fdea34c3bf2fbfed1c5b55b4ee21/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dd8eae52d2cac7d2c5c233fa4e0e6c19187fdea34c3bf2fbfed1c5b55b4ee21/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dd8eae52d2cac7d2c5c233fa4e0e6c19187fdea34c3bf2fbfed1c5b55b4ee21/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 28 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=28 pruub=9.917635918s) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active pruub 65.044883728s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 28 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=28 pruub=9.917635918s) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown pruub 65.044883728s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 podman[86938]: 2025-11-24 09:27:29.185799708 +0000 UTC m=+0.153771919 container init 9d905623972d0aa987200310c4ec57a07da946a5da1c0fe8765808bbc0db5587 (image=quay.io/ceph/ceph:v19, name=flamboyant_ishizaka, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:29 compute-0 podman[86938]: 2025-11-24 09:27:29.195742613 +0000 UTC m=+0.163714804 container start 9d905623972d0aa987200310c4ec57a07da946a5da1c0fe8765808bbc0db5587 (image=quay.io/ceph/ceph:v19, name=flamboyant_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:27:29 compute-0 podman[86938]: 2025-11-24 09:27:29.198657693 +0000 UTC m=+0.166629874 container attach 9d905623972d0aa987200310c4ec57a07da946a5da1c0fe8765808bbc0db5587 (image=quay.io/ceph/ceph:v19, name=flamboyant_ishizaka, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 24 09:27:29 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:29 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:29 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:29 compute-0 ceph-mon[74331]: osdmap e28: 3 total, 2 up, 3 in
Nov 24 09:27:29 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:29 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2584835535' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 09:27:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2584835535' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 09:27:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Nov 24 09:27:29 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Nov 24 09:27:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:29 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1f( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1e( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.8( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.4( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.2( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.6( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.7( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.b( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.12( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.17( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.18( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.19( empty local-lis/les=14/15 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:29 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:29 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 97eb0b44-6cb4-454d-a1ab-889bd95648a1 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 24 09:27:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Nov 24 09:27:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1f( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1e( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.4( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.2( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.6( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.7( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.0( empty local-lis/les=28/29 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.b( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.12( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.1( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.18( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.17( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 29 pg[3.19( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=14/14 les/c/f=15/15/0 sis=28) [0] r=0 lpr=28 pi=[14,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Nov 24 09:27:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3323422740' entity='client.admin' 
Nov 24 09:27:29 compute-0 flamboyant_ishizaka[86954]: set ssl_option
Nov 24 09:27:29 compute-0 systemd[1]: libpod-9d905623972d0aa987200310c4ec57a07da946a5da1c0fe8765808bbc0db5587.scope: Deactivated successfully.
Nov 24 09:27:29 compute-0 podman[86938]: 2025-11-24 09:27:29.673182932 +0000 UTC m=+0.641155123 container died 9d905623972d0aa987200310c4ec57a07da946a5da1c0fe8765808bbc0db5587 (image=quay.io/ceph/ceph:v19, name=flamboyant_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:27:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dd8eae52d2cac7d2c5c233fa4e0e6c19187fdea34c3bf2fbfed1c5b55b4ee21-merged.mount: Deactivated successfully.
Nov 24 09:27:29 compute-0 podman[86938]: 2025-11-24 09:27:29.708149837 +0000 UTC m=+0.676122028 container remove 9d905623972d0aa987200310c4ec57a07da946a5da1c0fe8765808bbc0db5587 (image=quay.io/ceph/ceph:v19, name=flamboyant_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:29 compute-0 systemd[1]: libpod-conmon-9d905623972d0aa987200310c4ec57a07da946a5da1c0fe8765808bbc0db5587.scope: Deactivated successfully.
Nov 24 09:27:29 compute-0 sudo[86935]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:29 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 24 09:27:29 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 24 09:27:29 compute-0 sudo[87014]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykrlgkcjmrviqpiiqccdztqhhguvoajv ; /usr/bin/python3'
Nov 24 09:27:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v84: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Nov 24 09:27:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Nov 24 09:27:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:29 compute-0 sudo[87014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:30 compute-0 python3[87016]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:30 compute-0 podman[87017]: 2025-11-24 09:27:30.097304091 +0000 UTC m=+0.048917695 container create d8a0def7f9f2ad18a31b0bec6a5fc7a1e89e6134a42c33311615c7779c563b46 (image=quay.io/ceph/ceph:v19, name=goofy_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 09:27:30 compute-0 systemd[1]: Started libpod-conmon-d8a0def7f9f2ad18a31b0bec6a5fc7a1e89e6134a42c33311615c7779c563b46.scope.
Nov 24 09:27:30 compute-0 sudo[87028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:27:30 compute-0 sudo[87028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:30 compute-0 sudo[87028]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4dbdd3a7a767a0fc8b0405b91d30c9ad3b41de5dd1b773ceb644fa8b0320d8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4dbdd3a7a767a0fc8b0405b91d30c9ad3b41de5dd1b773ceb644fa8b0320d8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4dbdd3a7a767a0fc8b0405b91d30c9ad3b41de5dd1b773ceb644fa8b0320d8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:30 compute-0 podman[87017]: 2025-11-24 09:27:30.074704566 +0000 UTC m=+0.026318270 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:30 compute-0 podman[87017]: 2025-11-24 09:27:30.181815289 +0000 UTC m=+0.133428903 container init d8a0def7f9f2ad18a31b0bec6a5fc7a1e89e6134a42c33311615c7779c563b46 (image=quay.io/ceph/ceph:v19, name=goofy_villani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 09:27:30 compute-0 podman[87017]: 2025-11-24 09:27:30.189860037 +0000 UTC m=+0.141473631 container start d8a0def7f9f2ad18a31b0bec6a5fc7a1e89e6134a42c33311615c7779c563b46 (image=quay.io/ceph/ceph:v19, name=goofy_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 09:27:30 compute-0 podman[87017]: 2025-11-24 09:27:30.192991973 +0000 UTC m=+0.144605567 container attach d8a0def7f9f2ad18a31b0bec6a5fc7a1e89e6134a42c33311615c7779c563b46 (image=quay.io/ceph/ceph:v19, name=goofy_villani, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:30 compute-0 sudo[87063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:30 compute-0 sudo[87063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:30 compute-0 sudo[87063]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 09:27:30 compute-0 sudo[87105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:27:30 compute-0 sudo[87105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 24 09:27:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Nov 24 09:27:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:30 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:30 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 5ee3f637-c6bd-4447-9cb6-b8ffb025dd4e (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 24 09:27:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:30 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:30 compute-0 ceph-mon[74331]: osdmap e29: 3 total, 2 up, 3 in
Nov 24 09:27:30 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:30 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3323422740' entity='client.admin' 
Nov 24 09:27:30 compute-0 ceph-mon[74331]: 3.1b scrub starts
Nov 24 09:27:30 compute-0 ceph-mon[74331]: 3.1b scrub ok
Nov 24 09:27:30 compute-0 ceph-mon[74331]: pgmap v84: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:30 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:30 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:30 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:30 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:30 compute-0 ceph-mon[74331]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 09:27:30 compute-0 ceph-mon[74331]: from='osd.2 [v2:192.168.122.102:6800/4204763159,v1:192.168.122.102:6801/4204763159]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 09:27:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 24 09:27:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e30 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Nov 24 09:27:30 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:30 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 24 09:27:30 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 24 09:27:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:30 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Nov 24 09:27:30 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Nov 24 09:27:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 24 09:27:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:30 compute-0 goofy_villani[87056]: Scheduled rgw.rgw update...
Nov 24 09:27:30 compute-0 goofy_villani[87056]: Scheduled ingress.rgw.default update...
Nov 24 09:27:30 compute-0 systemd[1]: libpod-d8a0def7f9f2ad18a31b0bec6a5fc7a1e89e6134a42c33311615c7779c563b46.scope: Deactivated successfully.
Nov 24 09:27:30 compute-0 podman[87017]: 2025-11-24 09:27:30.63300175 +0000 UTC m=+0.584615354 container died d8a0def7f9f2ad18a31b0bec6a5fc7a1e89e6134a42c33311615c7779c563b46 (image=quay.io/ceph/ceph:v19, name=goofy_villani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b4dbdd3a7a767a0fc8b0405b91d30c9ad3b41de5dd1b773ceb644fa8b0320d8-merged.mount: Deactivated successfully.
Nov 24 09:27:30 compute-0 podman[87017]: 2025-11-24 09:27:30.694248763 +0000 UTC m=+0.645862357 container remove d8a0def7f9f2ad18a31b0bec6a5fc7a1e89e6134a42c33311615c7779c563b46 (image=quay.io/ceph/ceph:v19, name=goofy_villani, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 09:27:30 compute-0 systemd[1]: libpod-conmon-d8a0def7f9f2ad18a31b0bec6a5fc7a1e89e6134a42c33311615c7779c563b46.scope: Deactivated successfully.
Nov 24 09:27:30 compute-0 sudo[87014]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:30 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 24 09:27:30 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 24 09:27:30 compute-0 sudo[87105]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:31 compute-0 python3[87249]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:27:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:27:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:27:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 30 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=30 pruub=9.637361526s) [0] r=0 lpr=30 pi=[16,30)/1 crt=0'0 mlcod 0'0 active pruub 67.065757751s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 30 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=30 pruub=8.629290581s) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active pruub 66.057731628s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 30 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=30 pruub=9.637361526s) [0] r=0 lpr=30 pi=[16,30)/1 crt=0'0 mlcod 0'0 unknown pruub 67.065757751s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 30 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=30 pruub=8.629290581s) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown pruub 66.057731628s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 24 09:27:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 24 09:27:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Nov 24 09:27:31 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Nov 24 09:27:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:31 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 4fc79748-7802-4afc-9e22-e3c9df4084bf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev cd9ad232-0e78-4353-bfe3-cbf4312033fc (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event cd9ad232-0e78-4353-bfe3-cbf4312033fc (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 7f776739-386b-423d-afac-c6293b5e943a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 7f776739-386b-423d-afac-c6293b5e943a (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev e245a622-8a08-4003-a5a4-0402c5deb169 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event e245a622-8a08-4003-a5a4-0402c5deb169 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 97eb0b44-6cb4-454d-a1ab-889bd95648a1 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 97eb0b44-6cb4-454d-a1ab-889bd95648a1 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 5ee3f637-c6bd-4447-9cb6-b8ffb025dd4e (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 5ee3f637-c6bd-4447-9cb6-b8ffb025dd4e (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 4fc79748-7802-4afc-9e22-e3c9df4084bf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 4fc79748-7802-4afc-9e22-e3c9df4084bf (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1f( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1e( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1e( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1f( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1e( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1f( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.19( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.973091125s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484703064s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.19( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.973091125s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484703064s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.11( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.18( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.972938538s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484680176s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.11( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.18( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.972938538s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484680176s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.10( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.16( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.972725868s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484680176s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.11( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.16( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.972725868s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484680176s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.10( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.13( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.13( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.17( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.972568512s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484695435s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.12( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.17( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.972568512s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484695435s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.12( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.12( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.15( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.972383499s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484611511s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.13( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.14( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.972352028s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484634399s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.15( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.972383499s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484611511s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.14( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.972352028s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484634399s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.15( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.13( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.972194672s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484626770s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.13( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.972194672s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484626770s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.15( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.10( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.14( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.14( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.14( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.12( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.971965790s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484603882s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.12( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.971965790s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484603882s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.17( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.15( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.17( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.11( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.971679688s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484474182s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.11( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.971679688s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484474182s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.16( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.16( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.16( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.17( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.9( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.9( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.8( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.8( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.10( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.971239090s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484466553s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.10( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.971239090s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484466553s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.8( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.9( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.e( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.971203804s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484527588s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.e( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.971203804s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484527588s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.b( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.b( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.d( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.970966339s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484458923s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.f( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.971050262s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484573364s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.a( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.a( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.f( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.971050262s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484573364s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.a( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.b( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.c( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.970685959s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484375000s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.c( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.970685959s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484375000s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.d( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.d( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.b( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.970501900s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484367371s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.b( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.970501900s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484367371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.c( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.a( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.970381737s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484367371s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.a( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.970381737s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484367371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.c( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.d( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.970966339s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484458923s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.d( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.7( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.7( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969978333s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484313965s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.7( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969978333s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484313965s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=31 pruub=9.551289558s) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown pruub 67.065757751s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.6( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969792366s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484275818s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.6( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969792366s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484275818s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=31 pruub=9.551289558s) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.065757751s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.0( empty local-lis/les=28/29 n=0 ec=14/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969704628s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484344482s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.3( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.3( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.6( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.2( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.6( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.7( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.7( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.c( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.0( empty local-lis/les=28/29 n=0 ec=14/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969704628s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484344482s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.5( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969202995s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484252930s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.5( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969202995s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484252930s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.6( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969673157s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484664917s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969673157s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484664917s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.4( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.5( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.2( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969043732s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484268188s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.4( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.2( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969043732s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484268188s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.3( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969053268s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484367371s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.5( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.4( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.3( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.969053268s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484367371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.5( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.3( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.4( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.968572617s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484092712s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.e( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.e( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.4( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.968572617s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484092712s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.2( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.2( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.f( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.f( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.f( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.e( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.9( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.968324661s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484107971s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.9( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.968324661s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484107971s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1c( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1d( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1c( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1a( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.968253136s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484230042s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1a( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.968253136s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484230042s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1d( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.968511581s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484085083s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.968511581s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484085083s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1a( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1d( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.963355064s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.479476929s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1a( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.963355064s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.479476929s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1b( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1a( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1b( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1c( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.1b( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1c( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.963098526s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.479507446s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4204763159; not ready for session (expect reconnect)
Nov 24 09:27:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:31 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.18( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1c( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.963098526s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.479507446s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.18( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1d( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.962973595s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.479492188s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.19( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1d( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.962973595s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.479492188s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1e( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.967380524s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.484077454s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.19( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[5.19( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=31) [] r=-1 lpr=31 pi=[16,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1e( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.967380524s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484077454s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1f( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.962697983s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 active pruub 71.479484558s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[3.1f( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=13.962697983s) [] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.479484558s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.18( empty local-lis/les=15/16 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 24 09:27:31 compute-0 ceph-mon[74331]: 2.1f scrub starts
Nov 24 09:27:31 compute-0 ceph-mon[74331]: osdmap e30: 3 total, 2 up, 3 in
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:27:31 compute-0 ceph-mon[74331]: 2.1f scrub ok
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='osd.2 [v2:192.168.122.102:6800/4204763159,v1:192.168.122.102:6801/4204763159]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:31 compute-0 ceph-mon[74331]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:31 compute-0 ceph-mon[74331]: Saving service ingress.rgw.default spec with placement count:2
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:31 compute-0 ceph-mon[74331]: 3.1c scrub starts
Nov 24 09:27:31 compute-0 ceph-mon[74331]: 3.1c scrub ok
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 24 09:27:31 compute-0 ceph-mon[74331]: osdmap e31: 3 total, 2 up, 3 in
Nov 24 09:27:31 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1f( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1e( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 python3[87320]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763976450.8874357-37189-258286334754128/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.12( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.11( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.14( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.15( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.10( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.16( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.17( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.9( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.8( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.13( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.b( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.a( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.d( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.7( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.0( empty local-lis/les=30/31 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.2( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.c( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.6( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.4( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.5( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.3( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.e( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1d( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1c( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1b( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.19( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.1a( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.18( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 31 pg[4.f( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=15/15 les/c/f=16/16/0 sis=30) [0] r=0 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:27:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:27:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:31 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.1e deep-scrub starts
Nov 24 09:27:31 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.1e deep-scrub ok
Nov 24 09:27:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v87: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Nov 24 09:27:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Nov 24 09:27:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 11 completed events
Nov 24 09:27:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:27:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:32 compute-0 sudo[87368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlguhbrsvwonooqrllcbvnynpwsfirxv ; /usr/bin/python3'
Nov 24 09:27:32 compute-0 sudo[87368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:32 compute-0 python3[87370]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:32 compute-0 podman[87371]: 2025-11-24 09:27:32.449306301 +0000 UTC m=+0.051709050 container create 6798114b5a20e95722f394de01ee5c5ee8d827a80a196dcc89c3f6ba72b5cb58 (image=quay.io/ceph/ceph:v19, name=friendly_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 24 09:27:32 compute-0 systemd[75656]: Starting Mark boot as successful...
Nov 24 09:27:32 compute-0 systemd[1]: Started libpod-conmon-6798114b5a20e95722f394de01ee5c5ee8d827a80a196dcc89c3f6ba72b5cb58.scope.
Nov 24 09:27:32 compute-0 systemd[75656]: Finished Mark boot as successful.
Nov 24 09:27:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff33888eb0b94746925948a254a32fdc23f2e96f1fae8fda02833366757e7873/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff33888eb0b94746925948a254a32fdc23f2e96f1fae8fda02833366757e7873/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff33888eb0b94746925948a254a32fdc23f2e96f1fae8fda02833366757e7873/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:32 compute-0 podman[87371]: 2025-11-24 09:27:32.513497275 +0000 UTC m=+0.115900054 container init 6798114b5a20e95722f394de01ee5c5ee8d827a80a196dcc89c3f6ba72b5cb58 (image=quay.io/ceph/ceph:v19, name=friendly_wilbur, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:32 compute-0 podman[87371]: 2025-11-24 09:27:32.521178281 +0000 UTC m=+0.123581030 container start 6798114b5a20e95722f394de01ee5c5ee8d827a80a196dcc89c3f6ba72b5cb58 (image=quay.io/ceph/ceph:v19, name=friendly_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 09:27:32 compute-0 podman[87371]: 2025-11-24 09:27:32.428151251 +0000 UTC m=+0.030554050 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:32 compute-0 podman[87371]: 2025-11-24 09:27:32.523811912 +0000 UTC m=+0.126214701 container attach 6798114b5a20e95722f394de01ee5c5ee8d827a80a196dcc89c3f6ba72b5cb58 (image=quay.io/ceph/ceph:v19, name=friendly_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4204763159; not ready for session (expect reconnect)
Nov 24 09:27:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:32 compute-0 ceph-mon[74331]: 2.1c scrub starts
Nov 24 09:27:32 compute-0 ceph-mon[74331]: 2.1c scrub ok
Nov 24 09:27:32 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:32 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:32 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:32 compute-0 ceph-mon[74331]: 4.1e deep-scrub starts
Nov 24 09:27:32 compute-0 ceph-mon[74331]: 4.1e deep-scrub ok
Nov 24 09:27:32 compute-0 ceph-mon[74331]: pgmap v87: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:32 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:32 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:32 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:32 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 24 09:27:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Nov 24 09:27:32 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Nov 24 09:27:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:32 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 32 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=32 pruub=9.049271584s) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active pruub 67.666213989s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:32 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 32 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=32 pruub=9.049271584s) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown pruub 67.666213989s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:32 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 24 09:27:32 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service node-exporter spec with placement *
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Nov 24 09:27:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 24 09:27:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Nov 24 09:27:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Nov 24 09:27:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Nov 24 09:27:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 24 09:27:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Nov 24 09:27:32 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Nov 24 09:27:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Nov 24 09:27:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:32 compute-0 friendly_wilbur[87387]: Scheduled node-exporter update...
Nov 24 09:27:32 compute-0 friendly_wilbur[87387]: Scheduled grafana update...
Nov 24 09:27:32 compute-0 friendly_wilbur[87387]: Scheduled prometheus update...
Nov 24 09:27:32 compute-0 friendly_wilbur[87387]: Scheduled alertmanager update...
Nov 24 09:27:32 compute-0 systemd[1]: libpod-6798114b5a20e95722f394de01ee5c5ee8d827a80a196dcc89c3f6ba72b5cb58.scope: Deactivated successfully.
Nov 24 09:27:32 compute-0 podman[87371]: 2025-11-24 09:27:32.990144059 +0000 UTC m=+0.592546848 container died 6798114b5a20e95722f394de01ee5c5ee8d827a80a196dcc89c3f6ba72b5cb58 (image=quay.io/ceph/ceph:v19, name=friendly_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:27:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff33888eb0b94746925948a254a32fdc23f2e96f1fae8fda02833366757e7873-merged.mount: Deactivated successfully.
Nov 24 09:27:33 compute-0 podman[87371]: 2025-11-24 09:27:33.030589582 +0000 UTC m=+0.632992341 container remove 6798114b5a20e95722f394de01ee5c5ee8d827a80a196dcc89c3f6ba72b5cb58 (image=quay.io/ceph/ceph:v19, name=friendly_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:33 compute-0 systemd[1]: libpod-conmon-6798114b5a20e95722f394de01ee5c5ee8d827a80a196dcc89c3f6ba72b5cb58.scope: Deactivated successfully.
Nov 24 09:27:33 compute-0 sudo[87368]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:33 compute-0 sudo[87446]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crornicnjlmbvgikvbbbesxkwsatgbpx ; /usr/bin/python3'
Nov 24 09:27:33 compute-0 sudo[87446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4204763159; not ready for session (expect reconnect)
Nov 24 09:27:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:33 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:33 compute-0 python3[87448]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 24 09:27:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Nov 24 09:27:33 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1a( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1b( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.18( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.19( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.c( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1f( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.6( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.7( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.4( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.3( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.5( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.f( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.9( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.a( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.b( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.14( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.16( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:33 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.11( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.10( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.13( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.12( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1d( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=17/18 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1a( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.18( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1b( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.19( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.c( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.7( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.6( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.4( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.0( empty local-lis/les=32/33 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.3( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.5( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.9( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.f( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.a( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.14( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.b( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.16( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.10( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.13( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.12( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1d( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.11( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 33 pg[6.1f( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=17/17 les/c/f=18/18/0 sis=32) [0] r=0 lpr=32 pi=[17,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:33 compute-0 podman[87449]: 2025-11-24 09:27:33.696160074 +0000 UTC m=+0.044501839 container create f6a62622b10c891fb827b893564d4e369a4b2d77bc7f4473e64bcb6b8c1eeb17 (image=quay.io/ceph/ceph:v19, name=gracious_tharp, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:27:33 compute-0 ceph-mon[74331]: purged_snaps scrub starts
Nov 24 09:27:33 compute-0 ceph-mon[74331]: purged_snaps scrub ok
Nov 24 09:27:33 compute-0 ceph-mon[74331]: 2.a deep-scrub starts
Nov 24 09:27:33 compute-0 ceph-mon[74331]: 2.a deep-scrub ok
Nov 24 09:27:33 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:33 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:27:33 compute-0 ceph-mon[74331]: osdmap e32: 3 total, 2 up, 3 in
Nov 24 09:27:33 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:33 compute-0 ceph-mon[74331]: 4.1f scrub starts
Nov 24 09:27:33 compute-0 ceph-mon[74331]: 4.1f scrub ok
Nov 24 09:27:33 compute-0 ceph-mon[74331]: from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:33 compute-0 ceph-mon[74331]: Saving service node-exporter spec with placement *
Nov 24 09:27:33 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:33 compute-0 ceph-mon[74331]: Saving service grafana spec with placement compute-0;count:1
Nov 24 09:27:33 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:33 compute-0 ceph-mon[74331]: Saving service prometheus spec with placement compute-0;count:1
Nov 24 09:27:33 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:33 compute-0 ceph-mon[74331]: Saving service alertmanager spec with placement compute-0;count:1
Nov 24 09:27:33 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:33 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:33 compute-0 systemd[1]: Started libpod-conmon-f6a62622b10c891fb827b893564d4e369a4b2d77bc7f4473e64bcb6b8c1eeb17.scope.
Nov 24 09:27:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a033f9c889edfb99e1a8bf25064aa0a804b5e43a5d056e59ebcfb79865924d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a033f9c889edfb99e1a8bf25064aa0a804b5e43a5d056e59ebcfb79865924d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a033f9c889edfb99e1a8bf25064aa0a804b5e43a5d056e59ebcfb79865924d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:33 compute-0 podman[87449]: 2025-11-24 09:27:33.764578318 +0000 UTC m=+0.112920103 container init f6a62622b10c891fb827b893564d4e369a4b2d77bc7f4473e64bcb6b8c1eeb17 (image=quay.io/ceph/ceph:v19, name=gracious_tharp, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:27:33 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 24 09:27:33 compute-0 podman[87449]: 2025-11-24 09:27:33.675768318 +0000 UTC m=+0.024110103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:33 compute-0 podman[87449]: 2025-11-24 09:27:33.774761241 +0000 UTC m=+0.123103006 container start f6a62622b10c891fb827b893564d4e369a4b2d77bc7f4473e64bcb6b8c1eeb17 (image=quay.io/ceph/ceph:v19, name=gracious_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 09:27:33 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 24 09:27:33 compute-0 podman[87449]: 2025-11-24 09:27:33.777887498 +0000 UTC m=+0.126229293 container attach f6a62622b10c891fb827b893564d4e369a4b2d77bc7f4473e64bcb6b8c1eeb17 (image=quay.io/ceph/ceph:v19, name=gracious_tharp, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v90: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:27:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:27:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Nov 24 09:27:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Nov 24 09:27:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 24 09:27:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:33 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:27:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:27:33 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:27:34 compute-0 sudo[87486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:27:34 compute-0 sudo[87486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 sudo[87486]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 sudo[87511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:27:34 compute-0 sudo[87511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 sudo[87511]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Nov 24 09:27:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/32884121' entity='client.admin' 
Nov 24 09:27:34 compute-0 systemd[1]: libpod-f6a62622b10c891fb827b893564d4e369a4b2d77bc7f4473e64bcb6b8c1eeb17.scope: Deactivated successfully.
Nov 24 09:27:34 compute-0 podman[87449]: 2025-11-24 09:27:34.151720061 +0000 UTC m=+0.500061836 container died f6a62622b10c891fb827b893564d4e369a4b2d77bc7f4473e64bcb6b8c1eeb17 (image=quay.io/ceph/ceph:v19, name=gracious_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:34 compute-0 sudo[87537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:34 compute-0 sudo[87537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-72a033f9c889edfb99e1a8bf25064aa0a804b5e43a5d056e59ebcfb79865924d-merged.mount: Deactivated successfully.
Nov 24 09:27:34 compute-0 sudo[87537]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 podman[87449]: 2025-11-24 09:27:34.18358877 +0000 UTC m=+0.531930535 container remove f6a62622b10c891fb827b893564d4e369a4b2d77bc7f4473e64bcb6b8c1eeb17 (image=quay.io/ceph/ceph:v19, name=gracious_tharp, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:34 compute-0 systemd[1]: libpod-conmon-f6a62622b10c891fb827b893564d4e369a4b2d77bc7f4473e64bcb6b8c1eeb17.scope: Deactivated successfully.
Nov 24 09:27:34 compute-0 sudo[87446]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 sudo[87572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:34 compute-0 sudo[87572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 sudo[87572]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 sudo[87599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:34 compute-0 sudo[87599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 sudo[87599]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 sudo[87675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypppwvqhogfnrkuclieyzshaaumlnwbs ; /usr/bin/python3'
Nov 24 09:27:34 compute-0 sudo[87675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:34 compute-0 sudo[87667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:34 compute-0 sudo[87667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 sudo[87667]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 sudo[87698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:34 compute-0 sudo[87698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 sudo[87698]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:34 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:34 compute-0 sudo[87723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 24 09:27:34 compute-0 sudo[87723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 sudo[87723]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 python3[87693]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:34 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:34 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:34 compute-0 ceph-mgr[74626]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4204763159; not ready for session (expect reconnect)
Nov 24 09:27:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:34 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:34 compute-0 ceph-mgr[74626]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 09:27:34 compute-0 sudo[87749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:27:34 compute-0 sudo[87749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 sudo[87749]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 podman[87748]: 2025-11-24 09:27:34.612038324 +0000 UTC m=+0.053399511 container create 9b19c37a64d4452a3c9b37a030d25df27f6bab8bfc85cbff9de21cf78114f787 (image=quay.io/ceph/ceph:v19, name=blissful_hypatia, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 24 09:27:34 compute-0 systemd[1]: Started libpod-conmon-9b19c37a64d4452a3c9b37a030d25df27f6bab8bfc85cbff9de21cf78114f787.scope.
Nov 24 09:27:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:34 compute-0 sudo[87786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:27:34 compute-0 sudo[87786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc853598867dc7629bdafaa652e4fea5a09d260f55bf12c77de37ad4844f29d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc853598867dc7629bdafaa652e4fea5a09d260f55bf12c77de37ad4844f29d0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc853598867dc7629bdafaa652e4fea5a09d260f55bf12c77de37ad4844f29d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:34 compute-0 sudo[87786]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 podman[87748]: 2025-11-24 09:27:34.677084399 +0000 UTC m=+0.118445606 container init 9b19c37a64d4452a3c9b37a030d25df27f6bab8bfc85cbff9de21cf78114f787 (image=quay.io/ceph/ceph:v19, name=blissful_hypatia, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:34 compute-0 podman[87748]: 2025-11-24 09:27:34.68518497 +0000 UTC m=+0.126546157 container start 9b19c37a64d4452a3c9b37a030d25df27f6bab8bfc85cbff9de21cf78114f787 (image=quay.io/ceph/ceph:v19, name=blissful_hypatia, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 09:27:34 compute-0 podman[87748]: 2025-11-24 09:27:34.591250553 +0000 UTC m=+0.032611760 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:34 compute-0 podman[87748]: 2025-11-24 09:27:34.688375475 +0000 UTC m=+0.129736672 container attach 9b19c37a64d4452a3c9b37a030d25df27f6bab8bfc85cbff9de21cf78114f787 (image=quay.io/ceph/ceph:v19, name=blissful_hypatia, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 09:27:34 compute-0 ceph-mon[74331]: 2.8 scrub starts
Nov 24 09:27:34 compute-0 ceph-mon[74331]: 2.8 scrub ok
Nov 24 09:27:34 compute-0 ceph-mon[74331]: osdmap e33: 3 total, 2 up, 3 in
Nov 24 09:27:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:34 compute-0 ceph-mon[74331]: 4.12 scrub starts
Nov 24 09:27:34 compute-0 ceph-mon[74331]: 4.12 scrub ok
Nov 24 09:27:34 compute-0 ceph-mon[74331]: pgmap v90: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 24 09:27:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:34 compute-0 ceph-mon[74331]: Adjusting osd_memory_target on compute-2 to 127.9M
Nov 24 09:27:34 compute-0 ceph-mon[74331]: Unable to set osd_memory_target on compute-2 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Nov 24 09:27:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:27:34 compute-0 ceph-mon[74331]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:27:34 compute-0 ceph-mon[74331]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:27:34 compute-0 ceph-mon[74331]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:27:34 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/32884121' entity='client.admin' 
Nov 24 09:27:34 compute-0 ceph-mon[74331]: OSD bench result of 8908.221181 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 09:27:34 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:34 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Nov 24 09:27:34 compute-0 sudo[87817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:34 compute-0 sudo[87817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Nov 24 09:27:34 compute-0 sudo[87817]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:34 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:34 compute-0 sudo[87843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:34 compute-0 sudo[87843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 sudo[87843]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 sudo[87879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:34 compute-0 sudo[87879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 sudo[87879]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:34 compute-0 sudo[87935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:34 compute-0 sudo[87935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:34 compute-0 sudo[87935]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:27:35 compute-0 sudo[87960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:35 compute-0 sudo[87960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:35 compute-0 sudo[87960]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4251225502' entity='client.admin' 
Nov 24 09:27:35 compute-0 sudo[87985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:35 compute-0 sudo[87985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:35 compute-0 sudo[87985]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:27:35 compute-0 systemd[1]: libpod-9b19c37a64d4452a3c9b37a030d25df27f6bab8bfc85cbff9de21cf78114f787.scope: Deactivated successfully.
Nov 24 09:27:35 compute-0 podman[87748]: 2025-11-24 09:27:35.095362577 +0000 UTC m=+0.536723764 container died 9b19c37a64d4452a3c9b37a030d25df27f6bab8bfc85cbff9de21cf78114f787 (image=quay.io/ceph/ceph:v19, name=blissful_hypatia, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc853598867dc7629bdafaa652e4fea5a09d260f55bf12c77de37ad4844f29d0-merged.mount: Deactivated successfully.
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 24 09:27:35 compute-0 podman[87748]: 2025-11-24 09:27:35.133040965 +0000 UTC m=+0.574402152 container remove 9b19c37a64d4452a3c9b37a030d25df27f6bab8bfc85cbff9de21cf78114f787 (image=quay.io/ceph/ceph:v19, name=blissful_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.19( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.19( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.18( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.18( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1d( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388719559s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.479492188s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1d( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388702393s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.479492188s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1b( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1b( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1c( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388432503s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.479507446s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1c( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388412476s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.479507446s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1e( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.392847061s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484077454s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388176918s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.479476929s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1d( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1d( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388110161s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.479476929s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1e( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.392706871s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484077454s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1a( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.392699242s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484230042s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1a( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.392685890s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484230042s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1f( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387839317s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.479484558s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1a( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1a( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1f( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387744904s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.479484558s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.9( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.392244339s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484107971s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.9( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.392231941s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484107971s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.f( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.f( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1c( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1c( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.e( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.4( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.391521454s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484092712s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.4( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.391505241s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484092712s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 systemd[1]: libpod-conmon-9b19c37a64d4452a3c9b37a030d25df27f6bab8bfc85cbff9de21cf78114f787.scope: Deactivated successfully.
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.391305923s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484085083s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.e( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.391234398s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484085083s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.2( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.3( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.391243935s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484367371s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.2( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.2( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.391034126s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484268188s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.3( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.391226768s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484367371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.2( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.391006470s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484268188s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.4( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.4( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.390951157s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484664917s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.5( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.5( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.5( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.390358925s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484252930s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.5( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.390344620s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484252930s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.3( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.3( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.7( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.6( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.390156746s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484275818s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.7( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.6( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.390023232s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484275818s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=5.971401691s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.065757751s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=5.971389771s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.065757751s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.7( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.389790535s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484313965s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.7( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.389773369s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484313965s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.6( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.0( empty local-lis/les=28/29 n=0 ec=14/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.389618874s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484344482s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.0( empty local-lis/les=28/29 n=0 ec=14/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.389598846s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484344482s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.a( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.389568329s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484367371s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.a( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.389554024s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484367371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.c( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.c( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.1( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.389585495s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484664917s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.6( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.b( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388964653s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484367371s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.d( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.d( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.b( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388946533s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484367371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.c( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388635635s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484375000s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.a( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.a( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.d( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388537407s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484458923s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.d( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388518333s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484458923s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.c( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388588905s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484375000s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.8( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.8( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.f( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388496399s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484573364s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.f( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388480186s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484573364s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.9( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.9( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.10( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388186455s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484466553s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.10( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.388172150s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484466553s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.b( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.b( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.16( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.16( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.17( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.11( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387940407s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484474182s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.17( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.12( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387973785s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484603882s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.11( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387893677s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484474182s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.12( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387960434s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484603882s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.14( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.14( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.13( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387845993s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484626770s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.15( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.13( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387832642s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484626770s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.15( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.14( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387647629s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484634399s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.14( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387631416s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484634399s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.13( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.13( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.15( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387495995s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484611511s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.15( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387479782s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484611511s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.e( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387264252s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484527588s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.e( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387248993s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484527588s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.16( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387361526s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484680176s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.12( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.12( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.16( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.387273788s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484680176s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.11( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.11( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.10( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.10( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.17( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.386822701s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484695435s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/4204763159,v1:192.168.122.102:6801/4204763159] boot
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.17( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.386800766s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484695435s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1e( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1e( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:35 compute-0 sudo[87675]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.19( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.385016441s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484703064s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.19( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.385003090s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484703064s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.18( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.384840012s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484680176s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[3.18( empty local-lis/les=28/29 n=0 ec=28/14 lis/c=28/28 les/c/f=29/29/0 sis=34 pruub=10.384822845s) [2] r=-1 lpr=34 pi=[28,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.484680176s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1f( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:35 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 34 pg[5.1f( empty local-lis/les=16/17 n=0 ec=30/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:35 compute-0 sudo[88047]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peodkwsniewkgtlwfznuatmchpsrchdf ; /usr/bin/python3'
Nov 24 09:27:35 compute-0 sudo[88047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:27:35 compute-0 python3[88049]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:35 compute-0 podman[88050]: 2025-11-24 09:27:35.534964897 +0000 UTC m=+0.035331055 container create da2fc13394d23af551c4051600591821342ab23f7a28c6a1d0ba69d6257fa09b (image=quay.io/ceph/ceph:v19, name=sharp_rhodes, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:35 compute-0 systemd[1]: Started libpod-conmon-da2fc13394d23af551c4051600591821342ab23f7a28c6a1d0ba69d6257fa09b.scope.
Nov 24 09:27:35 compute-0 sudo[88061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:35 compute-0 sudo[88061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:35 compute-0 sudo[88061]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26aee80cf5b65774c26bf1568cacf979c68511702374608321d749b6d52a28c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26aee80cf5b65774c26bf1568cacf979c68511702374608321d749b6d52a28c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26aee80cf5b65774c26bf1568cacf979c68511702374608321d749b6d52a28c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:35 compute-0 podman[88050]: 2025-11-24 09:27:35.520958227 +0000 UTC m=+0.021324405 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:35 compute-0 podman[88050]: 2025-11-24 09:27:35.620006053 +0000 UTC m=+0.120372211 container init da2fc13394d23af551c4051600591821342ab23f7a28c6a1d0ba69d6257fa09b (image=quay.io/ceph/ceph:v19, name=sharp_rhodes, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:35 compute-0 podman[88050]: 2025-11-24 09:27:35.625706707 +0000 UTC m=+0.126072855 container start da2fc13394d23af551c4051600591821342ab23f7a28c6a1d0ba69d6257fa09b (image=quay.io/ceph/ceph:v19, name=sharp_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:27:35 compute-0 sudo[88093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:27:35 compute-0 podman[88050]: 2025-11-24 09:27:35.635292034 +0000 UTC m=+0.135658262 container attach da2fc13394d23af551c4051600591821342ab23f7a28c6a1d0ba69d6257fa09b (image=quay.io/ceph/ceph:v19, name=sharp_rhodes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:35 compute-0 sudo[88093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:35 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 24 09:27:35 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 24 09:27:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v92: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Nov 24 09:27:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/390020590' entity='client.admin' 
Nov 24 09:27:36 compute-0 podman[88179]: 2025-11-24 09:27:36.013128257 +0000 UTC m=+0.042234187 container create eefb164aa198dd83a26bee158ac79b114eee798896a028b6c06f2a8845365b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:36 compute-0 systemd[1]: libpod-da2fc13394d23af551c4051600591821342ab23f7a28c6a1d0ba69d6257fa09b.scope: Deactivated successfully.
Nov 24 09:27:36 compute-0 podman[88050]: 2025-11-24 09:27:36.025044899 +0000 UTC m=+0.525411057 container died da2fc13394d23af551c4051600591821342ab23f7a28c6a1d0ba69d6257fa09b (image=quay.io/ceph/ceph:v19, name=sharp_rhodes, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:27:36 compute-0 ceph-mon[74331]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:36 compute-0 ceph-mon[74331]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:36 compute-0 ceph-mon[74331]: 2.1d scrub starts
Nov 24 09:27:36 compute-0 ceph-mon[74331]: 2.1d scrub ok
Nov 24 09:27:36 compute-0 ceph-mon[74331]: 4.11 deep-scrub starts
Nov 24 09:27:36 compute-0 ceph-mon[74331]: 4.11 deep-scrub ok
Nov 24 09:27:36 compute-0 ceph-mon[74331]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4251225502' entity='client.admin' 
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-0 ceph-mon[74331]: osd.2 [v2:192.168.122.102:6800/4204763159,v1:192.168.122.102:6801/4204763159] boot
Nov 24 09:27:36 compute-0 ceph-mon[74331]: osdmap e34: 3 total, 3 up, 3 in
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/390020590' entity='client.admin' 
Nov 24 09:27:36 compute-0 systemd[1]: Started libpod-conmon-eefb164aa198dd83a26bee158ac79b114eee798896a028b6c06f2a8845365b23.scope.
Nov 24 09:27:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c26aee80cf5b65774c26bf1568cacf979c68511702374608321d749b6d52a28c-merged.mount: Deactivated successfully.
Nov 24 09:27:36 compute-0 podman[88050]: 2025-11-24 09:27:36.071656648 +0000 UTC m=+0.572022806 container remove da2fc13394d23af551c4051600591821342ab23f7a28c6a1d0ba69d6257fa09b (image=quay.io/ceph/ceph:v19, name=sharp_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 24 09:27:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:36 compute-0 systemd[1]: libpod-conmon-da2fc13394d23af551c4051600591821342ab23f7a28c6a1d0ba69d6257fa09b.scope: Deactivated successfully.
Nov 24 09:27:36 compute-0 podman[88179]: 2025-11-24 09:27:36.089444178 +0000 UTC m=+0.118550128 container init eefb164aa198dd83a26bee158ac79b114eee798896a028b6c06f2a8845365b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:36 compute-0 podman[88179]: 2025-11-24 09:27:35.994370655 +0000 UTC m=+0.023476615 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:36 compute-0 podman[88179]: 2025-11-24 09:27:36.096023743 +0000 UTC m=+0.125129673 container start eefb164aa198dd83a26bee158ac79b114eee798896a028b6c06f2a8845365b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:36 compute-0 laughing_sutherland[88208]: 167 167
Nov 24 09:27:36 compute-0 podman[88179]: 2025-11-24 09:27:36.099533956 +0000 UTC m=+0.128639886 container attach eefb164aa198dd83a26bee158ac79b114eee798896a028b6c06f2a8845365b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_sutherland, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:36 compute-0 sudo[88047]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:36 compute-0 systemd[1]: libpod-eefb164aa198dd83a26bee158ac79b114eee798896a028b6c06f2a8845365b23.scope: Deactivated successfully.
Nov 24 09:27:36 compute-0 podman[88179]: 2025-11-24 09:27:36.100412946 +0000 UTC m=+0.129518876 container died eefb164aa198dd83a26bee158ac79b114eee798896a028b6c06f2a8845365b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 09:27:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b5059f675f19fdaa17baa95b8318aa71228b25685e9e49162bfe4b4bed93eab-merged.mount: Deactivated successfully.
Nov 24 09:27:36 compute-0 podman[88179]: 2025-11-24 09:27:36.129670376 +0000 UTC m=+0.158776306 container remove eefb164aa198dd83a26bee158ac79b114eee798896a028b6c06f2a8845365b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_sutherland, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 24 09:27:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 24 09:27:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 24 09:27:36 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 24 09:27:36 compute-0 systemd[1]: libpod-conmon-eefb164aa198dd83a26bee158ac79b114eee798896a028b6c06f2a8845365b23.scope: Deactivated successfully.
Nov 24 09:27:36 compute-0 podman[88235]: 2025-11-24 09:27:36.303371634 +0000 UTC m=+0.059733789 container create 3100310f295e50153ec1f37a9cd03a03d3b6b9f74474f7bd2ece964b7df839a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_kepler, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:27:36 compute-0 systemd[1]: Started libpod-conmon-3100310f295e50153ec1f37a9cd03a03d3b6b9f74474f7bd2ece964b7df839a8.scope.
Nov 24 09:27:36 compute-0 podman[88235]: 2025-11-24 09:27:36.275784304 +0000 UTC m=+0.032146549 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9f48e3459846172f248346dd97662de4a6d46eb08090b9e5b7f0c4765b5aed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9f48e3459846172f248346dd97662de4a6d46eb08090b9e5b7f0c4765b5aed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9f48e3459846172f248346dd97662de4a6d46eb08090b9e5b7f0c4765b5aed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9f48e3459846172f248346dd97662de4a6d46eb08090b9e5b7f0c4765b5aed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9f48e3459846172f248346dd97662de4a6d46eb08090b9e5b7f0c4765b5aed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:36 compute-0 podman[88235]: 2025-11-24 09:27:36.395998079 +0000 UTC m=+0.152360254 container init 3100310f295e50153ec1f37a9cd03a03d3b6b9f74474f7bd2ece964b7df839a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:36 compute-0 podman[88235]: 2025-11-24 09:27:36.408900614 +0000 UTC m=+0.165262779 container start 3100310f295e50153ec1f37a9cd03a03d3b6b9f74474f7bd2ece964b7df839a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_kepler, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 09:27:36 compute-0 podman[88235]: 2025-11-24 09:27:36.412717304 +0000 UTC m=+0.169079559 container attach 3100310f295e50153ec1f37a9cd03a03d3b6b9f74474f7bd2ece964b7df839a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_kepler, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 09:27:36 compute-0 sudo[88282]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhprzkdtijzxbeghlzelkqopvqlccnsd ; /usr/bin/python3'
Nov 24 09:27:36 compute-0 sudo[88282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:36 compute-0 musing_kepler[88252]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:27:36 compute-0 musing_kepler[88252]: --> All data devices are unavailable
Nov 24 09:27:36 compute-0 systemd[1]: libpod-3100310f295e50153ec1f37a9cd03a03d3b6b9f74474f7bd2ece964b7df839a8.scope: Deactivated successfully.
Nov 24 09:27:36 compute-0 podman[88235]: 2025-11-24 09:27:36.772086162 +0000 UTC m=+0.528448337 container died 3100310f295e50153ec1f37a9cd03a03d3b6b9f74474f7bd2ece964b7df839a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_kepler, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:36 compute-0 python3[88284]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-af9f48e3459846172f248346dd97662de4a6d46eb08090b9e5b7f0c4765b5aed-merged.mount: Deactivated successfully.
Nov 24 09:27:36 compute-0 podman[88235]: 2025-11-24 09:27:36.825463061 +0000 UTC m=+0.581825216 container remove 3100310f295e50153ec1f37a9cd03a03d3b6b9f74474f7bd2ece964b7df839a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_kepler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 24 09:27:36 compute-0 systemd[1]: libpod-conmon-3100310f295e50153ec1f37a9cd03a03d3b6b9f74474f7bd2ece964b7df839a8.scope: Deactivated successfully.
Nov 24 09:27:36 compute-0 sudo[88093]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:36 compute-0 sudo[88282]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:36 compute-0 sudo[88317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:36 compute-0 sudo[88317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:36 compute-0 sudo[88317]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:37 compute-0 sudo[88342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:27:37 compute-0 sudo[88342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:37 compute-0 ceph-mon[74331]: 2.7 scrub starts
Nov 24 09:27:37 compute-0 ceph-mon[74331]: 2.7 scrub ok
Nov 24 09:27:37 compute-0 ceph-mon[74331]: 4.14 scrub starts
Nov 24 09:27:37 compute-0 ceph-mon[74331]: 4.14 scrub ok
Nov 24 09:27:37 compute-0 ceph-mon[74331]: pgmap v92: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:37 compute-0 ceph-mon[74331]: osdmap e35: 3 total, 3 up, 3 in
Nov 24 09:27:37 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.15 deep-scrub starts
Nov 24 09:27:37 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.15 deep-scrub ok
Nov 24 09:27:37 compute-0 sudo[88416]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nggyuuqzhtpvltdwugngcizcuctuqomm ; /usr/bin/python3'
Nov 24 09:27:37 compute-0 sudo[88416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:37 compute-0 podman[88434]: 2025-11-24 09:27:37.377503994 +0000 UTC m=+0.042772690 container create e298381ef1e74e192f2b4bcded75effd05ce883e076168cd1bfa9757f4f6162b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Nov 24 09:27:37 compute-0 systemd[1]: Started libpod-conmon-e298381ef1e74e192f2b4bcded75effd05ce883e076168cd1bfa9757f4f6162b.scope.
Nov 24 09:27:37 compute-0 python3[88420]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.mauvni/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:37 compute-0 podman[88434]: 2025-11-24 09:27:37.358766912 +0000 UTC m=+0.024035628 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:37 compute-0 podman[88434]: 2025-11-24 09:27:37.458447324 +0000 UTC m=+0.123716040 container init e298381ef1e74e192f2b4bcded75effd05ce883e076168cd1bfa9757f4f6162b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ptolemy, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 09:27:37 compute-0 podman[88434]: 2025-11-24 09:27:37.466275209 +0000 UTC m=+0.131543905 container start e298381ef1e74e192f2b4bcded75effd05ce883e076168cd1bfa9757f4f6162b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 09:27:37 compute-0 podman[88434]: 2025-11-24 09:27:37.471436671 +0000 UTC m=+0.136705387 container attach e298381ef1e74e192f2b4bcded75effd05ce883e076168cd1bfa9757f4f6162b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ptolemy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:37 compute-0 silly_ptolemy[88449]: 167 167
Nov 24 09:27:37 compute-0 systemd[1]: libpod-e298381ef1e74e192f2b4bcded75effd05ce883e076168cd1bfa9757f4f6162b.scope: Deactivated successfully.
Nov 24 09:27:37 compute-0 podman[88434]: 2025-11-24 09:27:37.474516643 +0000 UTC m=+0.139785369 container died e298381ef1e74e192f2b4bcded75effd05ce883e076168cd1bfa9757f4f6162b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ptolemy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:27:37 compute-0 podman[88452]: 2025-11-24 09:27:37.486211299 +0000 UTC m=+0.045169356 container create d2bfedc8d15c9d5c6195c91d8bc3dabad7c14ddf2ce0f65cd989f74176f977e6 (image=quay.io/ceph/ceph:v19, name=gallant_mcclintock, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c105e1fb8829fc44f1f1625c0d7bd81ee3ba6b167fe793f814d57459517f64b1-merged.mount: Deactivated successfully.
Nov 24 09:27:37 compute-0 podman[88434]: 2025-11-24 09:27:37.51337371 +0000 UTC m=+0.178642406 container remove e298381ef1e74e192f2b4bcded75effd05ce883e076168cd1bfa9757f4f6162b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:37 compute-0 systemd[1]: Started libpod-conmon-d2bfedc8d15c9d5c6195c91d8bc3dabad7c14ddf2ce0f65cd989f74176f977e6.scope.
Nov 24 09:27:37 compute-0 systemd[1]: libpod-conmon-e298381ef1e74e192f2b4bcded75effd05ce883e076168cd1bfa9757f4f6162b.scope: Deactivated successfully.
Nov 24 09:27:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae5d47602d9165ab42b0d5ac6493d1231d51ba1157e17ccf47e49a8a32cec9e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae5d47602d9165ab42b0d5ac6493d1231d51ba1157e17ccf47e49a8a32cec9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ae5d47602d9165ab42b0d5ac6493d1231d51ba1157e17ccf47e49a8a32cec9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:37 compute-0 podman[88452]: 2025-11-24 09:27:37.554351997 +0000 UTC m=+0.113310034 container init d2bfedc8d15c9d5c6195c91d8bc3dabad7c14ddf2ce0f65cd989f74176f977e6 (image=quay.io/ceph/ceph:v19, name=gallant_mcclintock, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 24 09:27:37 compute-0 podman[88452]: 2025-11-24 09:27:37.560040711 +0000 UTC m=+0.118998728 container start d2bfedc8d15c9d5c6195c91d8bc3dabad7c14ddf2ce0f65cd989f74176f977e6 (image=quay.io/ceph/ceph:v19, name=gallant_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:37 compute-0 podman[88452]: 2025-11-24 09:27:37.463069913 +0000 UTC m=+0.022027960 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:37 compute-0 podman[88452]: 2025-11-24 09:27:37.56339756 +0000 UTC m=+0.122355577 container attach d2bfedc8d15c9d5c6195c91d8bc3dabad7c14ddf2ce0f65cd989f74176f977e6 (image=quay.io/ceph/ceph:v19, name=gallant_mcclintock, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:37 compute-0 podman[88496]: 2025-11-24 09:27:37.660247524 +0000 UTC m=+0.039032581 container create 52f576ffb440ac093d016c99fa795c7a87284e36a859160532e4175cd60f0721 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:37 compute-0 systemd[1]: Started libpod-conmon-52f576ffb440ac093d016c99fa795c7a87284e36a859160532e4175cd60f0721.scope.
Nov 24 09:27:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e65f53af6eeeef1713577833015f9d51c490d681d54daa3196fa922873095f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e65f53af6eeeef1713577833015f9d51c490d681d54daa3196fa922873095f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e65f53af6eeeef1713577833015f9d51c490d681d54daa3196fa922873095f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e65f53af6eeeef1713577833015f9d51c490d681d54daa3196fa922873095f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:37 compute-0 podman[88496]: 2025-11-24 09:27:37.736723549 +0000 UTC m=+0.115508646 container init 52f576ffb440ac093d016c99fa795c7a87284e36a859160532e4175cd60f0721 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:37 compute-0 podman[88496]: 2025-11-24 09:27:37.642310902 +0000 UTC m=+0.021095979 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:37 compute-0 podman[88496]: 2025-11-24 09:27:37.743906208 +0000 UTC m=+0.122691285 container start 52f576ffb440ac093d016c99fa795c7a87284e36a859160532e4175cd60f0721 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kilby, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:37 compute-0 podman[88496]: 2025-11-24 09:27:37.747164945 +0000 UTC m=+0.125950032 container attach 52f576ffb440ac093d016c99fa795c7a87284e36a859160532e4175cd60f0721 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Nov 24 09:27:37 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 24 09:27:37 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 24 09:27:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v94: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.mauvni/server_addr}] v 0)
Nov 24 09:27:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3786425625' entity='client.admin' 
Nov 24 09:27:37 compute-0 systemd[1]: libpod-d2bfedc8d15c9d5c6195c91d8bc3dabad7c14ddf2ce0f65cd989f74176f977e6.scope: Deactivated successfully.
Nov 24 09:27:37 compute-0 podman[88452]: 2025-11-24 09:27:37.949367225 +0000 UTC m=+0.508325242 container died d2bfedc8d15c9d5c6195c91d8bc3dabad7c14ddf2ce0f65cd989f74176f977e6 (image=quay.io/ceph/ceph:v19, name=gallant_mcclintock, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:27:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ae5d47602d9165ab42b0d5ac6493d1231d51ba1157e17ccf47e49a8a32cec9e-merged.mount: Deactivated successfully.
Nov 24 09:27:37 compute-0 podman[88452]: 2025-11-24 09:27:37.986373098 +0000 UTC m=+0.545331115 container remove d2bfedc8d15c9d5c6195c91d8bc3dabad7c14ddf2ce0f65cd989f74176f977e6 (image=quay.io/ceph/ceph:v19, name=gallant_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:27:37 compute-0 systemd[1]: libpod-conmon-d2bfedc8d15c9d5c6195c91d8bc3dabad7c14ddf2ce0f65cd989f74176f977e6.scope: Deactivated successfully.
Nov 24 09:27:38 compute-0 sudo[88416]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]: {
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:     "0": [
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:         {
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             "devices": [
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "/dev/loop3"
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             ],
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             "lv_name": "ceph_lv0",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             "lv_size": "21470642176",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             "name": "ceph_lv0",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             "tags": {
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.cluster_name": "ceph",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.crush_device_class": "",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.encrypted": "0",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.osd_id": "0",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.type": "block",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.vdo": "0",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:                 "ceph.with_tpm": "0"
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             },
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             "type": "block",
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:             "vg_name": "ceph_vg0"
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:         }
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]:     ]
Nov 24 09:27:38 compute-0 relaxed_kilby[88531]: }
Nov 24 09:27:38 compute-0 ceph-mon[74331]: 3.19 deep-scrub starts
Nov 24 09:27:38 compute-0 ceph-mon[74331]: 3.19 deep-scrub ok
Nov 24 09:27:38 compute-0 ceph-mon[74331]: 2.1e scrub starts
Nov 24 09:27:38 compute-0 ceph-mon[74331]: 2.1e scrub ok
Nov 24 09:27:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3786425625' entity='client.admin' 
Nov 24 09:27:38 compute-0 systemd[1]: libpod-52f576ffb440ac093d016c99fa795c7a87284e36a859160532e4175cd60f0721.scope: Deactivated successfully.
Nov 24 09:27:38 compute-0 podman[88496]: 2025-11-24 09:27:38.050099512 +0000 UTC m=+0.428884579 container died 52f576ffb440ac093d016c99fa795c7a87284e36a859160532e4175cd60f0721 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kilby, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Nov 24 09:27:38 compute-0 podman[88496]: 2025-11-24 09:27:38.090135486 +0000 UTC m=+0.468920543 container remove 52f576ffb440ac093d016c99fa795c7a87284e36a859160532e4175cd60f0721 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kilby, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:27:38 compute-0 systemd[1]: libpod-conmon-52f576ffb440ac093d016c99fa795c7a87284e36a859160532e4175cd60f0721.scope: Deactivated successfully.
Nov 24 09:27:38 compute-0 sudo[88342]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:38 compute-0 sudo[88566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:38 compute-0 sudo[88566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:38 compute-0 sudo[88566]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:38 compute-0 sudo[88591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:27:38 compute-0 sudo[88591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e65f53af6eeeef1713577833015f9d51c490d681d54daa3196fa922873095f7-merged.mount: Deactivated successfully.
Nov 24 09:27:38 compute-0 podman[88656]: 2025-11-24 09:27:38.676258633 +0000 UTC m=+0.047784158 container create 965991824bff98ebd26241200b0ec343d2a1753797de5f6f92a0c8e37577a827 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_easley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:38 compute-0 systemd[1]: Started libpod-conmon-965991824bff98ebd26241200b0ec343d2a1753797de5f6f92a0c8e37577a827.scope.
Nov 24 09:27:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:38 compute-0 podman[88656]: 2025-11-24 09:27:38.653379764 +0000 UTC m=+0.024905349 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:38 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Nov 24 09:27:38 compute-0 podman[88656]: 2025-11-24 09:27:38.760693445 +0000 UTC m=+0.132218980 container init 965991824bff98ebd26241200b0ec343d2a1753797de5f6f92a0c8e37577a827 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:38 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Nov 24 09:27:38 compute-0 podman[88656]: 2025-11-24 09:27:38.770874286 +0000 UTC m=+0.142399811 container start 965991824bff98ebd26241200b0ec343d2a1753797de5f6f92a0c8e37577a827 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_easley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:38 compute-0 sudo[88698]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oukicmidnvzqcbcgbjdvrjujmlkwfjns ; /usr/bin/python3'
Nov 24 09:27:38 compute-0 podman[88656]: 2025-11-24 09:27:38.774627424 +0000 UTC m=+0.146152949 container attach 965991824bff98ebd26241200b0ec343d2a1753797de5f6f92a0c8e37577a827 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_easley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 09:27:38 compute-0 systemd[1]: libpod-965991824bff98ebd26241200b0ec343d2a1753797de5f6f92a0c8e37577a827.scope: Deactivated successfully.
Nov 24 09:27:38 compute-0 exciting_easley[88673]: 167 167
Nov 24 09:27:38 compute-0 conmon[88673]: conmon 965991824bff98ebd262 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-965991824bff98ebd26241200b0ec343d2a1753797de5f6f92a0c8e37577a827.scope/container/memory.events
Nov 24 09:27:38 compute-0 sudo[88698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:38 compute-0 podman[88656]: 2025-11-24 09:27:38.776721134 +0000 UTC m=+0.148246659 container died 965991824bff98ebd26241200b0ec343d2a1753797de5f6f92a0c8e37577a827 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6355292303fc3f7e16798ceab90bdcf8cc6916404154d2b7bb206ea7ef19e718-merged.mount: Deactivated successfully.
Nov 24 09:27:38 compute-0 podman[88656]: 2025-11-24 09:27:38.810341977 +0000 UTC m=+0.181867512 container remove 965991824bff98ebd26241200b0ec343d2a1753797de5f6f92a0c8e37577a827 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:27:38 compute-0 systemd[1]: libpod-conmon-965991824bff98ebd26241200b0ec343d2a1753797de5f6f92a0c8e37577a827.scope: Deactivated successfully.
Nov 24 09:27:38 compute-0 python3[88702]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.qelqsg/server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:38 compute-0 podman[88721]: 2025-11-24 09:27:38.977865588 +0000 UTC m=+0.042493223 container create 2b22ef83d9a95f79ce84aaba44459074d63de12fd4e556ac1f78cac6580a7ce5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_austin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:38 compute-0 podman[88723]: 2025-11-24 09:27:38.983393839 +0000 UTC m=+0.042405521 container create bacc51f21eac3d3c4685917d6cfc755fd41f78effe01a2d00be699fb6eabdf02 (image=quay.io/ceph/ceph:v19, name=eager_varahamihira, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:39 compute-0 systemd[1]: Started libpod-conmon-bacc51f21eac3d3c4685917d6cfc755fd41f78effe01a2d00be699fb6eabdf02.scope.
Nov 24 09:27:39 compute-0 systemd[1]: Started libpod-conmon-2b22ef83d9a95f79ce84aaba44459074d63de12fd4e556ac1f78cac6580a7ce5.scope.
Nov 24 09:27:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c57a3fadc29ef42ba0462e4d1a36c5d35df916b711340c889b4e7b840794b09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31a438b530eea7e9fbea591d97abb2a83d2654238c5551b57da6c519ab7a440/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31a438b530eea7e9fbea591d97abb2a83d2654238c5551b57da6c519ab7a440/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f31a438b530eea7e9fbea591d97abb2a83d2654238c5551b57da6c519ab7a440/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c57a3fadc29ef42ba0462e4d1a36c5d35df916b711340c889b4e7b840794b09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c57a3fadc29ef42ba0462e4d1a36c5d35df916b711340c889b4e7b840794b09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c57a3fadc29ef42ba0462e4d1a36c5d35df916b711340c889b4e7b840794b09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:39 compute-0 podman[88723]: 2025-11-24 09:27:39.052293065 +0000 UTC m=+0.111304777 container init bacc51f21eac3d3c4685917d6cfc755fd41f78effe01a2d00be699fb6eabdf02 (image=quay.io/ceph/ceph:v19, name=eager_varahamihira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 09:27:39 compute-0 podman[88721]: 2025-11-24 09:27:39.05503807 +0000 UTC m=+0.119665725 container init 2b22ef83d9a95f79ce84aaba44459074d63de12fd4e556ac1f78cac6580a7ce5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_austin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:27:39 compute-0 podman[88721]: 2025-11-24 09:27:38.960935629 +0000 UTC m=+0.025563294 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:39 compute-0 podman[88723]: 2025-11-24 09:27:38.96307924 +0000 UTC m=+0.022090922 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:39 compute-0 podman[88723]: 2025-11-24 09:27:39.061773168 +0000 UTC m=+0.120784850 container start bacc51f21eac3d3c4685917d6cfc755fd41f78effe01a2d00be699fb6eabdf02 (image=quay.io/ceph/ceph:v19, name=eager_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:39 compute-0 podman[88723]: 2025-11-24 09:27:39.066071649 +0000 UTC m=+0.125083351 container attach bacc51f21eac3d3c4685917d6cfc755fd41f78effe01a2d00be699fb6eabdf02 (image=quay.io/ceph/ceph:v19, name=eager_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:27:39 compute-0 podman[88721]: 2025-11-24 09:27:39.066874488 +0000 UTC m=+0.131502123 container start 2b22ef83d9a95f79ce84aaba44459074d63de12fd4e556ac1f78cac6580a7ce5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_austin, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:39 compute-0 ceph-mon[74331]: 4.15 deep-scrub starts
Nov 24 09:27:39 compute-0 ceph-mon[74331]: 4.15 deep-scrub ok
Nov 24 09:27:39 compute-0 ceph-mon[74331]: 5.1e scrub starts
Nov 24 09:27:39 compute-0 ceph-mon[74331]: 5.1e scrub ok
Nov 24 09:27:39 compute-0 ceph-mon[74331]: 2.9 deep-scrub starts
Nov 24 09:27:39 compute-0 ceph-mon[74331]: 2.9 deep-scrub ok
Nov 24 09:27:39 compute-0 ceph-mon[74331]: 4.10 scrub starts
Nov 24 09:27:39 compute-0 ceph-mon[74331]: 4.10 scrub ok
Nov 24 09:27:39 compute-0 ceph-mon[74331]: pgmap v94: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:39 compute-0 podman[88721]: 2025-11-24 09:27:39.070871523 +0000 UTC m=+0.135499178 container attach 2b22ef83d9a95f79ce84aaba44459074d63de12fd4e556ac1f78cac6580a7ce5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 09:27:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.qelqsg/server_addr}] v 0)
Nov 24 09:27:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/171270571' entity='client.admin' 
Nov 24 09:27:39 compute-0 systemd[1]: libpod-bacc51f21eac3d3c4685917d6cfc755fd41f78effe01a2d00be699fb6eabdf02.scope: Deactivated successfully.
Nov 24 09:27:39 compute-0 podman[88723]: 2025-11-24 09:27:39.446244838 +0000 UTC m=+0.505256520 container died bacc51f21eac3d3c4685917d6cfc755fd41f78effe01a2d00be699fb6eabdf02 (image=quay.io/ceph/ceph:v19, name=eager_varahamihira, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f31a438b530eea7e9fbea591d97abb2a83d2654238c5551b57da6c519ab7a440-merged.mount: Deactivated successfully.
Nov 24 09:27:39 compute-0 podman[88723]: 2025-11-24 09:27:39.488250689 +0000 UTC m=+0.547262371 container remove bacc51f21eac3d3c4685917d6cfc755fd41f78effe01a2d00be699fb6eabdf02 (image=quay.io/ceph/ceph:v19, name=eager_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:39 compute-0 systemd[1]: libpod-conmon-bacc51f21eac3d3c4685917d6cfc755fd41f78effe01a2d00be699fb6eabdf02.scope: Deactivated successfully.
Nov 24 09:27:39 compute-0 sudo[88698]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:39 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 24 09:27:39 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 24 09:27:39 compute-0 lvm[88863]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:27:39 compute-0 lvm[88863]: VG ceph_vg0 finished
Nov 24 09:27:39 compute-0 trusting_austin[88754]: {}
Nov 24 09:27:39 compute-0 systemd[1]: libpod-2b22ef83d9a95f79ce84aaba44459074d63de12fd4e556ac1f78cac6580a7ce5.scope: Deactivated successfully.
Nov 24 09:27:39 compute-0 podman[88721]: 2025-11-24 09:27:39.770613811 +0000 UTC m=+0.835241456 container died 2b22ef83d9a95f79ce84aaba44459074d63de12fd4e556ac1f78cac6580a7ce5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_austin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:39 compute-0 systemd[1]: libpod-2b22ef83d9a95f79ce84aaba44459074d63de12fd4e556ac1f78cac6580a7ce5.scope: Consumed 1.185s CPU time.
Nov 24 09:27:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c57a3fadc29ef42ba0462e4d1a36c5d35df916b711340c889b4e7b840794b09-merged.mount: Deactivated successfully.
Nov 24 09:27:39 compute-0 podman[88721]: 2025-11-24 09:27:39.812229262 +0000 UTC m=+0.876856907 container remove 2b22ef83d9a95f79ce84aaba44459074d63de12fd4e556ac1f78cac6580a7ce5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_austin, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:39 compute-0 systemd[1]: libpod-conmon-2b22ef83d9a95f79ce84aaba44459074d63de12fd4e556ac1f78cac6580a7ce5.scope: Deactivated successfully.
Nov 24 09:27:39 compute-0 sudo[88591]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:27:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:27:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:39 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 0e760861-751c-4f61-940e-a3d49c5696e9 (Updating rgw.rgw deployment (+3 -> 3))
Nov 24 09:27:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qecnjt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 24 09:27:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qecnjt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:27:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qecnjt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:27:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 24 09:27:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:39 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:39 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.qecnjt on compute-2
Nov 24 09:27:39 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.qecnjt on compute-2
Nov 24 09:27:40 compute-0 ceph-mon[74331]: 3.18 scrub starts
Nov 24 09:27:40 compute-0 ceph-mon[74331]: 3.18 scrub ok
Nov 24 09:27:40 compute-0 ceph-mon[74331]: 2.6 scrub starts
Nov 24 09:27:40 compute-0 ceph-mon[74331]: 2.6 scrub ok
Nov 24 09:27:40 compute-0 ceph-mon[74331]: 4.16 deep-scrub starts
Nov 24 09:27:40 compute-0 ceph-mon[74331]: 4.16 deep-scrub ok
Nov 24 09:27:40 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/171270571' entity='client.admin' 
Nov 24 09:27:40 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:40 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:40 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qecnjt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:27:40 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qecnjt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:27:40 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:40 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:40 compute-0 sudo[88900]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esjqycpzeiuafmkwxckqqbcgxiwarqbh ; /usr/bin/python3'
Nov 24 09:27:40 compute-0 sudo[88900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:40 compute-0 python3[88902]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.rzcnzg/server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:40 compute-0 podman[88903]: 2025-11-24 09:27:40.587361799 +0000 UTC m=+0.037455335 container create db03c38dcaa3e6cc6ae7b1da35f607f9b63fcccbe1328ad101bf0c6fc4de471e (image=quay.io/ceph/ceph:v19, name=zealous_shaw, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 09:27:40 compute-0 systemd[1]: Started libpod-conmon-db03c38dcaa3e6cc6ae7b1da35f607f9b63fcccbe1328ad101bf0c6fc4de471e.scope.
Nov 24 09:27:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81c27bd892c25b6de133e88953739e8fc0b86e206f4a4da86e172a59511115d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81c27bd892c25b6de133e88953739e8fc0b86e206f4a4da86e172a59511115d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81c27bd892c25b6de133e88953739e8fc0b86e206f4a4da86e172a59511115d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:40 compute-0 podman[88903]: 2025-11-24 09:27:40.65690889 +0000 UTC m=+0.107002456 container init db03c38dcaa3e6cc6ae7b1da35f607f9b63fcccbe1328ad101bf0c6fc4de471e (image=quay.io/ceph/ceph:v19, name=zealous_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 24 09:27:40 compute-0 podman[88903]: 2025-11-24 09:27:40.665559363 +0000 UTC m=+0.115652899 container start db03c38dcaa3e6cc6ae7b1da35f607f9b63fcccbe1328ad101bf0c6fc4de471e (image=quay.io/ceph/ceph:v19, name=zealous_shaw, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:40 compute-0 podman[88903]: 2025-11-24 09:27:40.573832959 +0000 UTC m=+0.023926535 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:40 compute-0 podman[88903]: 2025-11-24 09:27:40.668550194 +0000 UTC m=+0.118643750 container attach db03c38dcaa3e6cc6ae7b1da35f607f9b63fcccbe1328ad101bf0c6fc4de471e (image=quay.io/ceph/ceph:v19, name=zealous_shaw, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 09:27:40 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 24 09:27:40 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 24 09:27:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.rzcnzg/server_addr}] v 0)
Nov 24 09:27:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/940045969' entity='client.admin' 
Nov 24 09:27:41 compute-0 systemd[1]: libpod-db03c38dcaa3e6cc6ae7b1da35f607f9b63fcccbe1328ad101bf0c6fc4de471e.scope: Deactivated successfully.
Nov 24 09:27:41 compute-0 podman[88903]: 2025-11-24 09:27:41.068417028 +0000 UTC m=+0.518510564 container died db03c38dcaa3e6cc6ae7b1da35f607f9b63fcccbe1328ad101bf0c6fc4de471e (image=quay.io/ceph/ceph:v19, name=zealous_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:27:41 compute-0 ceph-mon[74331]: 3.7 scrub starts
Nov 24 09:27:41 compute-0 ceph-mon[74331]: 3.7 scrub ok
Nov 24 09:27:41 compute-0 ceph-mon[74331]: 2.4 deep-scrub starts
Nov 24 09:27:41 compute-0 ceph-mon[74331]: 2.4 deep-scrub ok
Nov 24 09:27:41 compute-0 ceph-mon[74331]: 4.17 scrub starts
Nov 24 09:27:41 compute-0 ceph-mon[74331]: 4.17 scrub ok
Nov 24 09:27:41 compute-0 ceph-mon[74331]: pgmap v95: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:41 compute-0 ceph-mon[74331]: Deploying daemon rgw.rgw.compute-2.qecnjt on compute-2
Nov 24 09:27:41 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/940045969' entity='client.admin' 
Nov 24 09:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e81c27bd892c25b6de133e88953739e8fc0b86e206f4a4da86e172a59511115d-merged.mount: Deactivated successfully.
Nov 24 09:27:41 compute-0 podman[88903]: 2025-11-24 09:27:41.103261489 +0000 UTC m=+0.553355025 container remove db03c38dcaa3e6cc6ae7b1da35f607f9b63fcccbe1328ad101bf0c6fc4de471e (image=quay.io/ceph/ceph:v19, name=zealous_shaw, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:27:41 compute-0 systemd[1]: libpod-conmon-db03c38dcaa3e6cc6ae7b1da35f607f9b63fcccbe1328ad101bf0c6fc4de471e.scope: Deactivated successfully.
Nov 24 09:27:41 compute-0 sudo[88900]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:41 compute-0 sudo[88978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzvyvxexrvpnczdjazenbdyiowytiglx ; /usr/bin/python3'
Nov 24 09:27:41 compute-0 sudo[88978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:41 compute-0 python3[88980]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:41 compute-0 podman[88981]: 2025-11-24 09:27:41.479896374 +0000 UTC m=+0.042752159 container create 46b6dca223defbb115527b24e5c07f31316f75effe6ff7e8dfd6ef77e6b29a6a (image=quay.io/ceph/ceph:v19, name=modest_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:41 compute-0 systemd[1]: Started libpod-conmon-46b6dca223defbb115527b24e5c07f31316f75effe6ff7e8dfd6ef77e6b29a6a.scope.
Nov 24 09:27:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6b9d4f0b9a9bd387bb30ed050d19ee38b426c22ff96db421951593e2b28202/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6b9d4f0b9a9bd387bb30ed050d19ee38b426c22ff96db421951593e2b28202/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6b9d4f0b9a9bd387bb30ed050d19ee38b426c22ff96db421951593e2b28202/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:41 compute-0 podman[88981]: 2025-11-24 09:27:41.555004166 +0000 UTC m=+0.117859961 container init 46b6dca223defbb115527b24e5c07f31316f75effe6ff7e8dfd6ef77e6b29a6a (image=quay.io/ceph/ceph:v19, name=modest_hodgkin, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 24 09:27:41 compute-0 podman[88981]: 2025-11-24 09:27:41.461032439 +0000 UTC m=+0.023888224 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:41 compute-0 podman[88981]: 2025-11-24 09:27:41.563562318 +0000 UTC m=+0.126418103 container start 46b6dca223defbb115527b24e5c07f31316f75effe6ff7e8dfd6ef77e6b29a6a (image=quay.io/ceph/ceph:v19, name=modest_hodgkin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:41 compute-0 podman[88981]: 2025-11-24 09:27:41.567042851 +0000 UTC m=+0.129898636 container attach 46b6dca223defbb115527b24e5c07f31316f75effe6ff7e8dfd6ef77e6b29a6a (image=quay.io/ceph/ceph:v19, name=modest_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 09:27:41 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 24 09:27:41 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 24 09:27:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 09:27:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 09:27:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 09:27:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 09:27:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 09:27:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 09:27:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Nov 24 09:27:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3633642607' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 24 09:27:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 24 09:27:42 compute-0 ceph-mon[74331]: 2.5 scrub starts
Nov 24 09:27:42 compute-0 ceph-mon[74331]: 2.5 scrub ok
Nov 24 09:27:42 compute-0 ceph-mon[74331]: 5.1 scrub starts
Nov 24 09:27:42 compute-0 ceph-mon[74331]: 5.1 scrub ok
Nov 24 09:27:42 compute-0 ceph-mon[74331]: 4.9 scrub starts
Nov 24 09:27:42 compute-0 ceph-mon[74331]: 4.9 scrub ok
Nov 24 09:27:42 compute-0 ceph-mon[74331]: 2.1 scrub starts
Nov 24 09:27:42 compute-0 ceph-mon[74331]: 4.8 scrub starts
Nov 24 09:27:42 compute-0 ceph-mon[74331]: 4.8 scrub ok
Nov 24 09:27:42 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:42 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:42 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:42 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:42 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:42 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:27:42 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3633642607' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3633642607' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 24 09:27:42 compute-0 modest_hodgkin[88996]: module 'dashboard' is already disabled
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[2.19( empty local-lis/les=0/0 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.13( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.10( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[2.e( empty local-lis/les=0/0 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.b( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.9( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.mauvni(active, since 2m), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.8( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.f( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.e( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[2.1( empty local-lis/les=0/0 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.4( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[2.6( empty local-lis/les=0/0 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.3( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.2( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[2.4( empty local-lis/les=0/0 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.6( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[2.9( empty local-lis/les=0/0 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.1e( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.18( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[2.1f( empty local-lis/les=0/0 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[7.1b( empty local-lis/les=0/0 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[2.1e( empty local-lis/les=0/0 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.443203926s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.534309387s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.443133354s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.534309387s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.1a( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.542576790s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.633766174s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.1a( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.542535782s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.633766174s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.1b( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.542467117s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.633804321s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.1b( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.542437553s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.633804321s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.442868233s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.534271240s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.442751884s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.534271240s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.442740440s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.534278870s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.442657471s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.534278870s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.19( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.542139053s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.633811951s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.19( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.542105675s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.633811951s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.1e( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.542127609s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.633888245s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.1e( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.542111397s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.633888245s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.442428589s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.534240723s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.442381859s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.534240723s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.442279816s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.534202576s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.442267418s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.534202576s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.442244530s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.534217834s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.442214012s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.534217834s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.442186356s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.534278870s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.442155838s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.534278870s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.544475555s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.636657715s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.544455528s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.636657715s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.441978455s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.534194946s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.1( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.544465065s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.636718750s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.441942215s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.534194946s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.1( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.544428825s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.636718750s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.441839218s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.534225464s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.441824913s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.534225464s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.7( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.544202805s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.636756897s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.441168785s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.533752441s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.7( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.544182777s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.636756897s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.441150665s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.533752441s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.441260338s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.533866882s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.441225052s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.533866882s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.3( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.544201851s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.636909485s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.441049576s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.533752441s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.3( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.544185638s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.636909485s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.441018105s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.533752441s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.544174194s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.636962891s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.544158936s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.636962891s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.5( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.544056892s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.636993408s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.5( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.544042587s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.636993408s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.440648079s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.533622742s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.440625191s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.533622742s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.440795898s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.533859253s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543915749s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.637031555s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.440755844s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.533859253s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.440393448s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.533554077s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543883324s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.637031555s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543851852s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.637046814s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.440376282s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.533554077s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543827057s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.637046814s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.440102577s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.533477783s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.440080643s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.533477783s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.a( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543637276s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.637062073s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.439738274s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.533203125s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.15( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543657303s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.637207031s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.439698219s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.533203125s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.15( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543642998s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.637207031s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.439448357s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.533065796s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.a( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543416023s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.637062073s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.439414978s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.533065796s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.17( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543442726s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.637184143s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.17( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543422699s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.637184143s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.439282417s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.533073425s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.439645767s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.533477783s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.439628601s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.533477783s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.439249039s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.533073425s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.12( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543375015s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.637359619s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.430747032s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 81.524749756s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.12( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543353081s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.637359619s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=30/31 n=0 ec=30/15 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=13.430724144s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.524749756s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.1c( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543182373s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 83.637367249s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[6.1c( empty local-lis/les=32/33 n=0 ec=32/17 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=15.543160439s) [2] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.637367249s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[3.18( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[5.1e( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[3.7( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[3.6( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[3.19( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[5.3( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[5.19( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[3.4( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[5.5( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[3.1f( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[3.1e( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[3.2( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[3.1( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[5.1d( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[5.6( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[5.c( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[5.a( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[3.b( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[3.12( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 systemd[1]: libpod-46b6dca223defbb115527b24e5c07f31316f75effe6ff7e8dfd6ef77e6b29a6a.scope: Deactivated successfully.
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[5.14( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[5.17( empty local-lis/les=0/0 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 36 pg[3.17( empty local-lis/les=0/0 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:27:42 compute-0 podman[89021]: 2025-11-24 09:27:42.209336372 +0000 UTC m=+0.036730417 container died 46b6dca223defbb115527b24e5c07f31316f75effe6ff7e8dfd6ef77e6b29a6a (image=quay.io/ceph/ceph:v19, name=modest_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vproll", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vproll", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-df6b9d4f0b9a9bd387bb30ed050d19ee38b426c22ff96db421951593e2b28202-merged.mount: Deactivated successfully.
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vproll", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:27:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 24 09:27:42 compute-0 podman[89021]: 2025-11-24 09:27:42.253873593 +0000 UTC m=+0.081267608 container remove 46b6dca223defbb115527b24e5c07f31316f75effe6ff7e8dfd6ef77e6b29a6a (image=quay.io/ceph/ceph:v19, name=modest_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:42 compute-0 systemd[1]: libpod-conmon-46b6dca223defbb115527b24e5c07f31316f75effe6ff7e8dfd6ef77e6b29a6a.scope: Deactivated successfully.
Nov 24 09:27:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:42 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:42 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.vproll on compute-1
Nov 24 09:27:42 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.vproll on compute-1
Nov 24 09:27:42 compute-0 sudo[88978]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:42 compute-0 sudo[89060]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lddokzcinhohhxodjmxpionsxxpxlelr ; /usr/bin/python3'
Nov 24 09:27:42 compute-0 sudo[89060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:42 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Nov 24 09:27:42 compute-0 python3[89062]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:42 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Nov 24 09:27:42 compute-0 podman[89063]: 2025-11-24 09:27:42.71750371 +0000 UTC m=+0.043287012 container create 72e61eabaa6ff736003db5da9cc9da8bca59397d5d4bc0a32933a9b0cb8b815a (image=quay.io/ceph/ceph:v19, name=suspicious_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:42 compute-0 systemd[1]: Started libpod-conmon-72e61eabaa6ff736003db5da9cc9da8bca59397d5d4bc0a32933a9b0cb8b815a.scope.
Nov 24 09:27:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f9a6004f79003523ba78f4f626d623a6be7f9dad939badaffc0b10a7afc818/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f9a6004f79003523ba78f4f626d623a6be7f9dad939badaffc0b10a7afc818/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f9a6004f79003523ba78f4f626d623a6be7f9dad939badaffc0b10a7afc818/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:42 compute-0 podman[89063]: 2025-11-24 09:27:42.785278369 +0000 UTC m=+0.111061681 container init 72e61eabaa6ff736003db5da9cc9da8bca59397d5d4bc0a32933a9b0cb8b815a (image=quay.io/ceph/ceph:v19, name=suspicious_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 24 09:27:42 compute-0 podman[89063]: 2025-11-24 09:27:42.79253183 +0000 UTC m=+0.118315132 container start 72e61eabaa6ff736003db5da9cc9da8bca59397d5d4bc0a32933a9b0cb8b815a (image=quay.io/ceph/ceph:v19, name=suspicious_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:27:42 compute-0 podman[89063]: 2025-11-24 09:27:42.695732757 +0000 UTC m=+0.021516079 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:42 compute-0 podman[89063]: 2025-11-24 09:27:42.795623563 +0000 UTC m=+0.121406895 container attach 72e61eabaa6ff736003db5da9cc9da8bca59397d5d4bc0a32933a9b0cb8b815a (image=quay.io/ceph/ceph:v19, name=suspicious_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Nov 24 09:27:43 compute-0 ceph-mon[74331]: 2.1 scrub ok
Nov 24 09:27:43 compute-0 ceph-mon[74331]: 5.3 scrub starts
Nov 24 09:27:43 compute-0 ceph-mon[74331]: 5.3 scrub ok
Nov 24 09:27:43 compute-0 ceph-mon[74331]: pgmap v96: 193 pgs: 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3633642607' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 24 09:27:43 compute-0 ceph-mon[74331]: osdmap e36: 3 total, 3 up, 3 in
Nov 24 09:27:43 compute-0 ceph-mon[74331]: mgrmap e12: compute-0.mauvni(active, since 2m), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vproll", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vproll", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:43 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:43 compute-0 ceph-mon[74331]: 6.18 scrub starts
Nov 24 09:27:43 compute-0 ceph-mon[74331]: 6.18 scrub ok
Nov 24 09:27:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 24 09:27:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 24 09:27:43 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[8.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[2.1e( empty local-lis/les=36/37 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[3.1f( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.1b( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[3.1e( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[2.1f( empty local-lis/les=36/37 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[2.19( empty local-lis/les=36/37 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[5.1d( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.1e( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[3.4( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[2.9( empty local-lis/les=36/37 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[5.19( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[5.5( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.13( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.6( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.10( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[3.1( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[2.4( empty local-lis/les=36/37 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[3.6( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[3.7( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.2( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.3( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[2.6( empty local-lis/les=36/37 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[3.2( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.4( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[5.6( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[2.1( empty local-lis/les=36/37 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[5.3( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[2.e( empty local-lis/les=36/37 n=0 ec=28/13 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[5.c( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.e( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.8( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[5.a( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.f( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[3.b( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.9( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[5.17( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.b( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[7.18( empty local-lis/les=36/37 n=0 ec=32/19 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[3.12( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[5.14( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[3.17( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[3.18( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[3.19( empty local-lis/les=36/37 n=0 ec=28/14 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 37 pg[5.1e( empty local-lis/les=36/37 n=0 ec=30/16 lis/c=34/34 les/c/f=35/35/0 sis=36) [0] r=0 lpr=36 pi=[34,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Nov 24 09:27:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2662573742' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 24 09:27:43 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Nov 24 09:27:43 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Nov 24 09:27:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:27:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:27:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 24 09:27:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 24 09:27:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:27:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:27:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 24 09:27:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:43 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:43 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.zlrxyg on compute-0
Nov 24 09:27:43 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.zlrxyg on compute-0
Nov 24 09:27:43 compute-0 sudo[89103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:43 compute-0 sudo[89103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:43 compute-0 sudo[89103]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v99: 194 pgs: 1 unknown, 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:43 compute-0 sudo[89128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:43 compute-0 sudo[89128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:44 compute-0 ceph-mon[74331]: Deploying daemon rgw.rgw.compute-1.vproll on compute-1
Nov 24 09:27:44 compute-0 ceph-mon[74331]: 2.1a scrub starts
Nov 24 09:27:44 compute-0 ceph-mon[74331]: 2.1a scrub ok
Nov 24 09:27:44 compute-0 ceph-mon[74331]: 5.0 scrub starts
Nov 24 09:27:44 compute-0 ceph-mon[74331]: 5.0 scrub ok
Nov 24 09:27:44 compute-0 ceph-mon[74331]: osdmap e37: 3 total, 3 up, 3 in
Nov 24 09:27:44 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3262419163' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 09:27:44 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 09:27:44 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2662573742' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 24 09:27:44 compute-0 ceph-mon[74331]: 6.1f scrub starts
Nov 24 09:27:44 compute-0 ceph-mon[74331]: 6.1f scrub ok
Nov 24 09:27:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:27:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:27:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:44 compute-0 ceph-mon[74331]: from='mgr.14122 192.168.122.100:0/2808195857' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 24 09:27:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2662573742' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  1: '-n'
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  2: 'mgr.compute-0.mauvni'
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  3: '-f'
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  4: '--setuser'
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  5: 'ceph'
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  6: '--setgroup'
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  7: 'ceph'
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  8: '--default-log-to-file=false'
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  9: '--default-log-to-journald=true'
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr respawn  exe_path /proc/self/exe
Nov 24 09:27:44 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.mauvni(active, since 2m), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 24 09:27:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 24 09:27:44 compute-0 systemd[1]: libpod-72e61eabaa6ff736003db5da9cc9da8bca59397d5d4bc0a32933a9b0cb8b815a.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 24 09:27:44 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 38 pg[8.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:44 compute-0 podman[89169]: 2025-11-24 09:27:44.211948746 +0000 UTC m=+0.024331285 container died 72e61eabaa6ff736003db5da9cc9da8bca59397d5d4bc0a32933a9b0cb8b815a (image=quay.io/ceph/ceph:v19, name=suspicious_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 24 09:27:44 compute-0 sshd-session[75733]: Connection closed by 192.168.122.100 port 34720
Nov 24 09:27:44 compute-0 sshd-session[75907]: Connection closed by 192.168.122.100 port 34766
Nov 24 09:27:44 compute-0 sshd-session[75963]: Connection closed by 192.168.122.100 port 34786
Nov 24 09:27:44 compute-0 sshd-session[75762]: Connection closed by 192.168.122.100 port 34732
Nov 24 09:27:44 compute-0 sshd-session[75791]: Connection closed by 192.168.122.100 port 34736
Nov 24 09:27:44 compute-0 sshd-session[75878]: Connection closed by 192.168.122.100 port 34762
Nov 24 09:27:44 compute-0 sshd-session[75674]: Connection closed by 192.168.122.100 port 34708
Nov 24 09:27:44 compute-0 sshd-session[75849]: Connection closed by 192.168.122.100 port 34750
Nov 24 09:27:44 compute-0 sshd-session[75934]: Connection closed by 192.168.122.100 port 34778
Nov 24 09:27:44 compute-0 sshd-session[75704]: Connection closed by 192.168.122.100 port 34716
Nov 24 09:27:44 compute-0 sshd-session[75820]: Connection closed by 192.168.122.100 port 34748
Nov 24 09:27:44 compute-0 sshd-session[75675]: Connection closed by 192.168.122.100 port 34710
Nov 24 09:27:44 compute-0 sshd-session[75788]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-0 sshd-session[75759]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-0 sshd-session[75875]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-0 sshd-session[75701]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 sshd-session[75960]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-0 sshd-session[75661]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-0 systemd-logind[822]: Session 27 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-34f9a6004f79003523ba78f4f626d623a6be7f9dad939badaffc0b10a7afc818-merged.mount: Deactivated successfully.
Nov 24 09:27:44 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Session 26 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Session 30 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Session 24 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Session 33 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Session 23 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Removed session 27.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Removed session 26.
Nov 24 09:27:44 compute-0 sshd-session[75817]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-0 systemd-logind[822]: Removed session 30.
Nov 24 09:27:44 compute-0 sshd-session[75652]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-0 podman[89169]: 2025-11-24 09:27:44.264373143 +0000 UTC m=+0.076755662 container remove 72e61eabaa6ff736003db5da9cc9da8bca59397d5d4bc0a32933a9b0cb8b815a (image=quay.io/ceph/ceph:v19, name=suspicious_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 09:27:44 compute-0 systemd-logind[822]: Removed session 24.
Nov 24 09:27:44 compute-0 sshd-session[75931]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-0 sshd-session[75904]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 sshd-session[75730]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-0 sshd-session[75846]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:44 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ignoring --setuser ceph since I am not root
Nov 24 09:27:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ignoring --setgroup ceph since I am not root
Nov 24 09:27:44 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 systemd[1]: libpod-conmon-72e61eabaa6ff736003db5da9cc9da8bca59397d5d4bc0a32933a9b0cb8b815a.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Session 28 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Session 32 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Session 31 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 09:27:44 compute-0 systemd-logind[822]: Session 21 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: pidfile_write: ignore empty --pid-file
Nov 24 09:27:44 compute-0 systemd-logind[822]: Session 29 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Session 25 logged out. Waiting for processes to exit.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Removed session 23.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Removed session 28.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Removed session 32.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Removed session 31.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Removed session 25.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Removed session 29.
Nov 24 09:27:44 compute-0 systemd-logind[822]: Removed session 21.
Nov 24 09:27:44 compute-0 sudo[89060]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'alerts'
Nov 24 09:27:44 compute-0 podman[89232]: 2025-11-24 09:27:44.378434864 +0000 UTC m=+0.034048285 container create ca7b5d794f935e265594a9d0caff6782198a782d2523f3c37039ded0db2d166c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_vaughan, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:44 compute-0 systemd[1]: Started libpod-conmon-ca7b5d794f935e265594a9d0caff6782198a782d2523f3c37039ded0db2d166c.scope.
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'balancer'
Nov 24 09:27:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:44.421+0000 7f279ac3b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:27:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:44 compute-0 podman[89232]: 2025-11-24 09:27:44.364486805 +0000 UTC m=+0.020100246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:44 compute-0 podman[89232]: 2025-11-24 09:27:44.462336433 +0000 UTC m=+0.117949874 container init ca7b5d794f935e265594a9d0caff6782198a782d2523f3c37039ded0db2d166c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Nov 24 09:27:44 compute-0 podman[89232]: 2025-11-24 09:27:44.470981107 +0000 UTC m=+0.126594528 container start ca7b5d794f935e265594a9d0caff6782198a782d2523f3c37039ded0db2d166c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_vaughan, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 09:27:44 compute-0 podman[89232]: 2025-11-24 09:27:44.474327066 +0000 UTC m=+0.129940507 container attach ca7b5d794f935e265594a9d0caff6782198a782d2523f3c37039ded0db2d166c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:44 compute-0 magical_vaughan[89250]: 167 167
Nov 24 09:27:44 compute-0 systemd[1]: libpod-ca7b5d794f935e265594a9d0caff6782198a782d2523f3c37039ded0db2d166c.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 podman[89232]: 2025-11-24 09:27:44.479078238 +0000 UTC m=+0.134691659 container died ca7b5d794f935e265594a9d0caff6782198a782d2523f3c37039ded0db2d166c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_vaughan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d277e37d5bced334f185a459b465fada78a1d0a373cfdf18009df301de3fad57-merged.mount: Deactivated successfully.
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:27:44 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'cephadm'
Nov 24 09:27:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:44.508+0000 7f279ac3b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:27:44 compute-0 podman[89232]: 2025-11-24 09:27:44.513451729 +0000 UTC m=+0.169065150 container remove ca7b5d794f935e265594a9d0caff6782198a782d2523f3c37039ded0db2d166c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_vaughan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:27:44 compute-0 systemd[1]: libpod-conmon-ca7b5d794f935e265594a9d0caff6782198a782d2523f3c37039ded0db2d166c.scope: Deactivated successfully.
Nov 24 09:27:44 compute-0 systemd[1]: Reloading.
Nov 24 09:27:44 compute-0 systemd-rc-local-generator[89312]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:27:44 compute-0 systemd-sysv-generator[89317]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:27:44 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 24 09:27:44 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 24 09:27:44 compute-0 sudo[89323]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgvjmdlmlcntdnbgdjcdquytwqxhsqpm ; /usr/bin/python3'
Nov 24 09:27:44 compute-0 sudo[89323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:44 compute-0 ceph-mon[74331]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 09:27:44 compute-0 systemd[1]: Reloading.
Nov 24 09:27:44 compute-0 systemd-sysv-generator[89366]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:27:44 compute-0 systemd-rc-local-generator[89362]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:27:44 compute-0 python3[89329]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:45 compute-0 podman[89379]: 2025-11-24 09:27:45.018269119 +0000 UTC m=+0.039723739 container create 9c8404155e355cd1971c90dda53ae43ab37268c6dd24552dbb8753e27d2ecf2b (image=quay.io/ceph/ceph:v19, name=kind_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:27:45 compute-0 systemd[1]: Started libpod-conmon-9c8404155e355cd1971c90dda53ae43ab37268c6dd24552dbb8753e27d2ecf2b.scope.
Nov 24 09:27:45 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.zlrxyg for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:27:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcac0a27329cb58b0cfaf5f074e0f27ff58407b7b882480cd180f4613eb05a39/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcac0a27329cb58b0cfaf5f074e0f27ff58407b7b882480cd180f4613eb05a39/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcac0a27329cb58b0cfaf5f074e0f27ff58407b7b882480cd180f4613eb05a39/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:45 compute-0 podman[89379]: 2025-11-24 09:27:45.09636315 +0000 UTC m=+0.117817830 container init 9c8404155e355cd1971c90dda53ae43ab37268c6dd24552dbb8753e27d2ecf2b (image=quay.io/ceph/ceph:v19, name=kind_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:45 compute-0 podman[89379]: 2025-11-24 09:27:45.001671437 +0000 UTC m=+0.023126097 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:45 compute-0 ceph-mon[74331]: 7.1c scrub starts
Nov 24 09:27:45 compute-0 ceph-mon[74331]: 7.1c scrub ok
Nov 24 09:27:45 compute-0 ceph-mon[74331]: 3.8 scrub starts
Nov 24 09:27:45 compute-0 ceph-mon[74331]: 3.8 scrub ok
Nov 24 09:27:45 compute-0 ceph-mon[74331]: Deploying daemon rgw.rgw.compute-0.zlrxyg on compute-0
Nov 24 09:27:45 compute-0 ceph-mon[74331]: pgmap v99: 194 pgs: 1 unknown, 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:45 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2662573742' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 24 09:27:45 compute-0 ceph-mon[74331]: mgrmap e13: compute-0.mauvni(active, since 2m), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:45 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 24 09:27:45 compute-0 ceph-mon[74331]: osdmap e38: 3 total, 3 up, 3 in
Nov 24 09:27:45 compute-0 ceph-mon[74331]: 6.c scrub starts
Nov 24 09:27:45 compute-0 ceph-mon[74331]: 6.c scrub ok
Nov 24 09:27:45 compute-0 ceph-mon[74331]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 09:27:45 compute-0 podman[89379]: 2025-11-24 09:27:45.123243835 +0000 UTC m=+0.144698465 container start 9c8404155e355cd1971c90dda53ae43ab37268c6dd24552dbb8753e27d2ecf2b (image=quay.io/ceph/ceph:v19, name=kind_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 09:27:45 compute-0 podman[89379]: 2025-11-24 09:27:45.127456084 +0000 UTC m=+0.148910744 container attach 9c8404155e355cd1971c90dda53ae43ab37268c6dd24552dbb8753e27d2ecf2b (image=quay.io/ceph/ceph:v19, name=kind_sutherland, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 24 09:27:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 24 09:27:45 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 24 09:27:45 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 39 pg[9.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [0] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 24 09:27:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 09:27:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 24 09:27:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 09:27:45 compute-0 podman[89461]: 2025-11-24 09:27:45.263068873 +0000 UTC m=+0.036731117 container create db2712baa80b604e57148f5857f4a525fbe9af463c05d2ca7a501282570da624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-rgw-rgw-compute-0-zlrxyg, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:27:45 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'crash'
Nov 24 09:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9deb111c2892233c9d4aaa4d655db715bd00e2a460962b563345d761a4b26d42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9deb111c2892233c9d4aaa4d655db715bd00e2a460962b563345d761a4b26d42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9deb111c2892233c9d4aaa4d655db715bd00e2a460962b563345d761a4b26d42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9deb111c2892233c9d4aaa4d655db715bd00e2a460962b563345d761a4b26d42/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.zlrxyg supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:45 compute-0 podman[89461]: 2025-11-24 09:27:45.320328484 +0000 UTC m=+0.093990768 container init db2712baa80b604e57148f5857f4a525fbe9af463c05d2ca7a501282570da624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-rgw-rgw-compute-0-zlrxyg, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 09:27:45 compute-0 podman[89461]: 2025-11-24 09:27:45.325712771 +0000 UTC m=+0.099375025 container start db2712baa80b604e57148f5857f4a525fbe9af463c05d2ca7a501282570da624 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-rgw-rgw-compute-0-zlrxyg, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 09:27:45 compute-0 bash[89461]: db2712baa80b604e57148f5857f4a525fbe9af463c05d2ca7a501282570da624
Nov 24 09:27:45 compute-0 podman[89461]: 2025-11-24 09:27:45.247091346 +0000 UTC m=+0.020753620 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:27:45 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.zlrxyg for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:27:45 compute-0 ceph-mgr[74626]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:27:45 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'dashboard'
Nov 24 09:27:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:45.378+0000 7f279ac3b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:27:45 compute-0 radosgw[89481]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:27:45 compute-0 radosgw[89481]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Nov 24 09:27:45 compute-0 radosgw[89481]: framework: beast
Nov 24 09:27:45 compute-0 radosgw[89481]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 24 09:27:45 compute-0 radosgw[89481]: init_numa not setting numa affinity
Nov 24 09:27:45 compute-0 sudo[89128]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:45 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Nov 24 09:27:45 compute-0 systemd[1]: session-33.scope: Consumed 25.274s CPU time.
Nov 24 09:27:45 compute-0 systemd-logind[822]: Removed session 33.
Nov 24 09:27:45 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 24 09:27:45 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 24 09:27:45 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'devicehealth'
Nov 24 09:27:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:46.064+0000 7f279ac3b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-0 ceph-mgr[74626]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 09:27:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 24 09:27:46 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 09:27:46 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 09:27:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 24 09:27:46 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 24 09:27:46 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 40 pg[9.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [0] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:46 compute-0 ceph-mon[74331]: 2.17 scrub starts
Nov 24 09:27:46 compute-0 ceph-mon[74331]: 2.17 scrub ok
Nov 24 09:27:46 compute-0 ceph-mon[74331]: 5.e deep-scrub starts
Nov 24 09:27:46 compute-0 ceph-mon[74331]: 5.e deep-scrub ok
Nov 24 09:27:46 compute-0 ceph-mon[74331]: osdmap e39: 3 total, 3 up, 3 in
Nov 24 09:27:46 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 09:27:46 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 09:27:46 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 09:27:46 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2761939167' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 09:27:46 compute-0 ceph-mon[74331]: 4.f scrub starts
Nov 24 09:27:46 compute-0 ceph-mon[74331]: 4.f scrub ok
Nov 24 09:27:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 09:27:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 09:27:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   from numpy import show_config as show_numpy_config
Nov 24 09:27:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:46.244+0000 7f279ac3b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-0 ceph-mgr[74626]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'influx'
Nov 24 09:27:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:46.327+0000 7f279ac3b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-0 ceph-mgr[74626]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'insights'
Nov 24 09:27:46 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'iostat'
Nov 24 09:27:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:46.486+0000 7f279ac3b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-0 ceph-mgr[74626]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:27:46 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'k8sevents'
Nov 24 09:27:46 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 24 09:27:46 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 24 09:27:46 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'localpool'
Nov 24 09:27:46 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 09:27:47 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'mirroring'
Nov 24 09:27:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 24 09:27:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 24 09:27:47 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 24 09:27:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 24 09:27:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 24 09:27:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 24 09:27:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-0 ceph-mon[74331]: 7.12 scrub starts
Nov 24 09:27:47 compute-0 ceph-mon[74331]: 7.12 scrub ok
Nov 24 09:27:47 compute-0 ceph-mon[74331]: 5.4 scrub starts
Nov 24 09:27:47 compute-0 ceph-mon[74331]: 5.4 scrub ok
Nov 24 09:27:47 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 09:27:47 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 09:27:47 compute-0 ceph-mon[74331]: osdmap e40: 3 total, 3 up, 3 in
Nov 24 09:27:47 compute-0 ceph-mon[74331]: 2.16 scrub starts
Nov 24 09:27:47 compute-0 ceph-mon[74331]: 2.16 scrub ok
Nov 24 09:27:47 compute-0 ceph-mon[74331]: 4.4 scrub starts
Nov 24 09:27:47 compute-0 ceph-mon[74331]: 4.4 scrub ok
Nov 24 09:27:47 compute-0 ceph-mon[74331]: osdmap e41: 3 total, 3 up, 3 in
Nov 24 09:27:47 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2761939167' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 09:27:47 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'nfs'
Nov 24 09:27:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:47.521+0000 7f279ac3b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-0 ceph-mgr[74626]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'orchestrator'
Nov 24 09:27:47 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 24 09:27:47 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 24 09:27:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:47.760+0000 7f279ac3b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-0 ceph-mgr[74626]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 09:27:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:47.839+0000 7f279ac3b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-0 ceph-mgr[74626]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'osd_support'
Nov 24 09:27:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:47.908+0000 7f279ac3b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-0 ceph-mgr[74626]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 09:27:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:47.985+0000 7f279ac3b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-0 ceph-mgr[74626]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:27:47 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'progress'
Nov 24 09:27:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:48.055+0000 7f279ac3b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-0 ceph-mgr[74626]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'prometheus'
Nov 24 09:27:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 24 09:27:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 09:27:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 09:27:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 09:27:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 24 09:27:48 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 24 09:27:48 compute-0 ceph-mon[74331]: 3.1d deep-scrub starts
Nov 24 09:27:48 compute-0 ceph-mon[74331]: 3.1d deep-scrub ok
Nov 24 09:27:48 compute-0 ceph-mon[74331]: 2.14 scrub starts
Nov 24 09:27:48 compute-0 ceph-mon[74331]: 2.14 scrub ok
Nov 24 09:27:48 compute-0 ceph-mon[74331]: 6.6 scrub starts
Nov 24 09:27:48 compute-0 ceph-mon[74331]: 6.6 scrub ok
Nov 24 09:27:48 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 09:27:48 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 09:27:48 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 09:27:48 compute-0 ceph-mon[74331]: osdmap e42: 3 total, 3 up, 3 in
Nov 24 09:27:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:48.398+0000 7f279ac3b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-0 ceph-mgr[74626]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rbd_support'
Nov 24 09:27:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:48.500+0000 7f279ac3b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-0 ceph-mgr[74626]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'restful'
Nov 24 09:27:48 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Nov 24 09:27:48 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Nov 24 09:27:48 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rgw'
Nov 24 09:27:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:48.951+0000 7f279ac3b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-0 ceph-mgr[74626]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:27:48 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rook'
Nov 24 09:27:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 24 09:27:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 24 09:27:49 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 24 09:27:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 24 09:27:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 24 09:27:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 24 09:27:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-0 ceph-mon[74331]: 5.1a scrub starts
Nov 24 09:27:49 compute-0 ceph-mon[74331]: 5.1a scrub ok
Nov 24 09:27:49 compute-0 ceph-mon[74331]: 7.17 scrub starts
Nov 24 09:27:49 compute-0 ceph-mon[74331]: 7.17 scrub ok
Nov 24 09:27:49 compute-0 ceph-mon[74331]: 6.4 deep-scrub starts
Nov 24 09:27:49 compute-0 ceph-mon[74331]: 6.4 deep-scrub ok
Nov 24 09:27:49 compute-0 ceph-mon[74331]: osdmap e43: 3 total, 3 up, 3 in
Nov 24 09:27:49 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2761939167' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 09:27:49 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 43 pg[11.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:27:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:49.535+0000 7f279ac3b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-0 ceph-mgr[74626]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'selftest'
Nov 24 09:27:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:49.606+0000 7f279ac3b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-0 ceph-mgr[74626]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'snap_schedule'
Nov 24 09:27:49 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Nov 24 09:27:49 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Nov 24 09:27:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:49.691+0000 7f279ac3b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-0 ceph-mgr[74626]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'stats'
Nov 24 09:27:49 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'status'
Nov 24 09:27:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:49.849+0000 7f279ac3b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-0 ceph-mgr[74626]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'telegraf'
Nov 24 09:27:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:49.920+0000 7f279ac3b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-0 ceph-mgr[74626]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:27:49 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'telemetry'
Nov 24 09:27:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:50.084+0000 7f279ac3b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 44 pg[11.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:27:50 compute-0 ceph-mon[74331]: 3.1a scrub starts
Nov 24 09:27:50 compute-0 ceph-mon[74331]: 3.1a scrub ok
Nov 24 09:27:50 compute-0 ceph-mon[74331]: 2.11 scrub starts
Nov 24 09:27:50 compute-0 ceph-mon[74331]: 2.11 scrub ok
Nov 24 09:27:50 compute-0 ceph-mon[74331]: 6.0 scrub starts
Nov 24 09:27:50 compute-0 ceph-mon[74331]: 6.0 scrub ok
Nov 24 09:27:50 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 09:27:50 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 09:27:50 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 09:27:50 compute-0 ceph-mon[74331]: osdmap e44: 3 total, 3 up, 3 in
Nov 24 09:27:50 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2580956473' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2761939167' entity='client.rgw.rgw.compute-2.qecnjt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:50.307+0000 7f279ac3b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'volumes'
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.rzcnzg restarted
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.rzcnzg started
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.qelqsg restarted
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.qelqsg started
Nov 24 09:27:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:50.578+0000 7f279ac3b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'zabbix'
Nov 24 09:27:50 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.0 deep-scrub starts
Nov 24 09:27:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:50.649+0000 7f279ac3b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mauvni restarted
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mauvni
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: ms_deliver_dispatch: unhandled message 0x55ea74735860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 24 09:27:50 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.0 deep-scrub ok
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr handle_mgr_map Activating!
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr handle_mgr_map I am now activating
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.mauvni(active, starting, since 0.0337997s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e1 all = 1
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: balancer
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Manager daemon compute-0.mauvni is now available
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [balancer INFO root] Starting
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:27:50
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: cephadm
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: crash
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: dashboard
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [dashboard INFO access_control] Loading user roles DB version=2
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [dashboard INFO sso] Loading SSO DB version=1
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: devicehealth
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: iostat
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Starting
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: nfs
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: orchestrator
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: pg_autoscaler
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: progress
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [progress INFO root] Loading...
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f271f7efbb0>, <progress.module.GhostEvent object at 0x7f271f7efd00>, <progress.module.GhostEvent object at 0x7f271f7efca0>, <progress.module.GhostEvent object at 0x7f271f7efe50>, <progress.module.GhostEvent object at 0x7f271f7efeb0>, <progress.module.GhostEvent object at 0x7f271f7efee0>, <progress.module.GhostEvent object at 0x7f271f7eff10>, <progress.module.GhostEvent object at 0x7f271f7eff40>, <progress.module.GhostEvent object at 0x7f271f7eff70>, <progress.module.GhostEvent object at 0x7f271f7effa0>, <progress.module.GhostEvent object at 0x7f271f7effd0>] historic events
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [progress INFO root] Loaded OSDMap, ready.
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] recovery thread starting
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] starting setup
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: rbd_support
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: restful
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: status
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: telemetry
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [restful INFO root] server_addr: :: server_port: 8003
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [restful WARNING root] server not running: no certificate configured
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] PerfHandler: starting
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TaskHandler: starting
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: volumes
Nov 24 09:27:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"} v 0)
Nov 24 09:27:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [rbd_support INFO root] setup complete
Nov 24 09:27:50 compute-0 radosgw[89481]: v1 topic migration: starting v1 topic migration..
Nov 24 09:27:50 compute-0 radosgw[89481]: v1 topic migration: finished v1 topic migration
Nov 24 09:27:50 compute-0 radosgw[89481]: LDAP not started since no server URIs were provided in the configuration.
Nov 24 09:27:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-rgw-rgw-compute-0-zlrxyg[89477]: 2025-11-24T09:27:50.886+0000 7fdafa07f980 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 24 09:27:50 compute-0 radosgw[89481]: framework: beast
Nov 24 09:27:50 compute-0 radosgw[89481]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 24 09:27:50 compute-0 radosgw[89481]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 24 09:27:50 compute-0 radosgw[89481]: starting handler: beast
Nov 24 09:27:50 compute-0 radosgw[89481]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 24 09:27:50 compute-0 radosgw[89481]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 24 09:27:50 compute-0 radosgw[89481]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:27:50 compute-0 radosgw[89481]: mgrc service_daemon_register rgw.14394 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.zlrxyg,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=0565e2b2-234e-414b-b909-932048ceb050,zone_name=default,zonegroup_id=5f03f326-32a0-4275-804c-1875d841eeca,zonegroup_name=default}
Nov 24 09:27:50 compute-0 radosgw[89481]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 24 09:27:50 compute-0 radosgw[89481]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 24 09:27:50 compute-0 radosgw[89481]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Nov 24 09:27:50 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Nov 24 09:27:51 compute-0 radosgw[89481]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Nov 24 09:27:51 compute-0 radosgw[89481]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Nov 24 09:27:51 compute-0 sshd-session[90231]: Accepted publickey for ceph-admin from 192.168.122.100 port 56872 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:27:51 compute-0 systemd-logind[822]: New session 34 of user ceph-admin.
Nov 24 09:27:51 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Nov 24 09:27:51 compute-0 sshd-session[90231]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:27:51 compute-0 ceph-mon[74331]: 3.9 scrub starts
Nov 24 09:27:51 compute-0 ceph-mon[74331]: 3.9 scrub ok
Nov 24 09:27:51 compute-0 ceph-mon[74331]: Standby manager daemon compute-2.rzcnzg restarted
Nov 24 09:27:51 compute-0 ceph-mon[74331]: Standby manager daemon compute-2.rzcnzg started
Nov 24 09:27:51 compute-0 ceph-mon[74331]: Standby manager daemon compute-1.qelqsg restarted
Nov 24 09:27:51 compute-0 ceph-mon[74331]: Standby manager daemon compute-1.qelqsg started
Nov 24 09:27:51 compute-0 ceph-mon[74331]: 7.15 scrub starts
Nov 24 09:27:51 compute-0 ceph-mon[74331]: 7.15 scrub ok
Nov 24 09:27:51 compute-0 ceph-mon[74331]: 4.0 deep-scrub starts
Nov 24 09:27:51 compute-0 ceph-mon[74331]: Active manager daemon compute-0.mauvni restarted
Nov 24 09:27:51 compute-0 ceph-mon[74331]: Activating manager daemon compute-0.mauvni
Nov 24 09:27:51 compute-0 ceph-mon[74331]: 4.0 deep-scrub ok
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2097383266' entity='client.rgw.rgw.compute-0.zlrxyg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-1.vproll' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='client.? ' entity='client.rgw.rgw.compute-2.qecnjt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 09:27:51 compute-0 ceph-mon[74331]: osdmap e45: 3 total, 3 up, 3 in
Nov 24 09:27:51 compute-0 ceph-mon[74331]: mgrmap e14: compute-0.mauvni(active, starting, since 0.0337997s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: Manager daemon compute-0.mauvni is now available
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.module] Engine started.
Nov 24 09:27:51 compute-0 sudo[90248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:51 compute-0 sudo[90248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:51 compute-0 sudo[90248]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:51 compute-0 sudo[90273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:27:51 compute-0 sudo[90273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:51 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 24 09:27:51 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.24203 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:51 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.mauvni(active, since 1.05886s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Nov 24 09:27:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:51 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:51 compute-0 kind_sutherland[89396]: Option GRAFANA_API_USERNAME updated
Nov 24 09:27:51 compute-0 systemd[1]: libpod-9c8404155e355cd1971c90dda53ae43ab37268c6dd24552dbb8753e27d2ecf2b.scope: Deactivated successfully.
Nov 24 09:27:51 compute-0 podman[89379]: 2025-11-24 09:27:51.761600251 +0000 UTC m=+6.783054891 container died 9c8404155e355cd1971c90dda53ae43ab37268c6dd24552dbb8753e27d2ecf2b (image=quay.io/ceph/ceph:v19, name=kind_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 24 09:27:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcac0a27329cb58b0cfaf5f074e0f27ff58407b7b882480cd180f4613eb05a39-merged.mount: Deactivated successfully.
Nov 24 09:27:51 compute-0 podman[89379]: 2025-11-24 09:27:51.82174382 +0000 UTC m=+6.843198440 container remove 9c8404155e355cd1971c90dda53ae43ab37268c6dd24552dbb8753e27d2ecf2b (image=quay.io/ceph/ceph:v19, name=kind_sutherland, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:27:51 compute-0 systemd[1]: libpod-conmon-9c8404155e355cd1971c90dda53ae43ab37268c6dd24552dbb8753e27d2ecf2b.scope: Deactivated successfully.
Nov 24 09:27:51 compute-0 sudo[89323]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:51 compute-0 podman[90379]: 2025-11-24 09:27:51.960550574 +0000 UTC m=+0.059787842 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:27:52] ENGINE Bus STARTING
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:27:52] ENGINE Bus STARTING
Nov 24 09:27:52 compute-0 sudo[90423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mddcvpjjlovijalglomgnqhuufhuqoje ; /usr/bin/python3'
Nov 24 09:27:52 compute-0 sudo[90423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:52 compute-0 podman[90379]: 2025-11-24 09:27:52.089585048 +0000 UTC m=+0.188822306 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:27:52] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:27:52] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:27:52 compute-0 python3[90444]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Nov 24 09:27:52 compute-0 ceph-mon[74331]: 3.0 scrub starts
Nov 24 09:27:52 compute-0 ceph-mon[74331]: 3.0 scrub ok
Nov 24 09:27:52 compute-0 ceph-mon[74331]: 2.3 scrub starts
Nov 24 09:27:52 compute-0 ceph-mon[74331]: 2.3 scrub ok
Nov 24 09:27:52 compute-0 ceph-mon[74331]: 4.7 scrub starts
Nov 24 09:27:52 compute-0 ceph-mon[74331]: 4.7 scrub ok
Nov 24 09:27:52 compute-0 ceph-mon[74331]: mgrmap e15: compute-0.mauvni(active, since 1.05886s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:52 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:27:52] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:27:52] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:27:52] ENGINE Bus STARTED
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:27:52] ENGINE Bus STARTED
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:27:52] ENGINE Client ('192.168.122.100', 44150) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:27:52] ENGINE Client ('192.168.122.100', 44150) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:27:52 compute-0 podman[90483]: 2025-11-24 09:27:52.314393722 +0000 UTC m=+0.047816330 container create 2ecad0c5aa92166be619e26009719d64827a590ab733401a687248031ade6eea (image=quay.io/ceph/ceph:v19, name=romantic_khayyam, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:27:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:27:52 compute-0 systemd[1]: Started libpod-conmon-2ecad0c5aa92166be619e26009719d64827a590ab733401a687248031ade6eea.scope.
Nov 24 09:27:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800124ef28053f7c4bd68b6021ed5d0fb3cb2f32ce4ddbddccefbf7f9a26ce8f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800124ef28053f7c4bd68b6021ed5d0fb3cb2f32ce4ddbddccefbf7f9a26ce8f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800124ef28053f7c4bd68b6021ed5d0fb3cb2f32ce4ddbddccefbf7f9a26ce8f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:52 compute-0 podman[90483]: 2025-11-24 09:27:52.291897771 +0000 UTC m=+0.025320399 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:52 compute-0 podman[90483]: 2025-11-24 09:27:52.402970151 +0000 UTC m=+0.136392759 container init 2ecad0c5aa92166be619e26009719d64827a590ab733401a687248031ade6eea (image=quay.io/ceph/ceph:v19, name=romantic_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:27:52 compute-0 podman[90483]: 2025-11-24 09:27:52.412524287 +0000 UTC m=+0.145946885 container start 2ecad0c5aa92166be619e26009719d64827a590ab733401a687248031ade6eea (image=quay.io/ceph/ceph:v19, name=romantic_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 24 09:27:52 compute-0 podman[90483]: 2025-11-24 09:27:52.415535108 +0000 UTC m=+0.148957706 container attach 2ecad0c5aa92166be619e26009719d64827a590ab733401a687248031ade6eea (image=quay.io/ceph/ceph:v19, name=romantic_khayyam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:52 compute-0 sudo[90273]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:27:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:27:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:52 compute-0 sudo[90565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:52 compute-0 sudo[90565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:52 compute-0 sudo[90565]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:52 compute-0 sudo[90590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:27:52 compute-0 sudo[90590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:52 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 09:27:52 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:52 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 24 09:27:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:52 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14418 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Nov 24 09:27:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:52 compute-0 romantic_khayyam[90524]: Option GRAFANA_API_PASSWORD updated
Nov 24 09:27:52 compute-0 systemd[1]: libpod-2ecad0c5aa92166be619e26009719d64827a590ab733401a687248031ade6eea.scope: Deactivated successfully.
Nov 24 09:27:52 compute-0 podman[90483]: 2025-11-24 09:27:52.878970681 +0000 UTC m=+0.612393309 container died 2ecad0c5aa92166be619e26009719d64827a590ab733401a687248031ade6eea (image=quay.io/ceph/ceph:v19, name=romantic_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Nov 24 09:27:52 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Check health
Nov 24 09:27:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:27:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:27:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-800124ef28053f7c4bd68b6021ed5d0fb3cb2f32ce4ddbddccefbf7f9a26ce8f-merged.mount: Deactivated successfully.
Nov 24 09:27:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:52 compute-0 podman[90483]: 2025-11-24 09:27:52.922846966 +0000 UTC m=+0.656269574 container remove 2ecad0c5aa92166be619e26009719d64827a590ab733401a687248031ade6eea (image=quay.io/ceph/ceph:v19, name=romantic_khayyam, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:52 compute-0 systemd[1]: libpod-conmon-2ecad0c5aa92166be619e26009719d64827a590ab733401a687248031ade6eea.scope: Deactivated successfully.
Nov 24 09:27:52 compute-0 sudo[90423]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:53 compute-0 sudo[90590]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:53 compute-0 sudo[90694]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgzcnxlxangfgzciwcgydhyisxpiargr ; /usr/bin/python3'
Nov 24 09:27:53 compute-0 sudo[90694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:53 compute-0 sudo[90693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:53 compute-0 sudo[90693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:53 compute-0 sudo[90693]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:53 compute-0 sudo[90721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 24 09:27:53 compute-0 ceph-mon[74331]: 5.d scrub starts
Nov 24 09:27:53 compute-0 sudo[90721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:53 compute-0 ceph-mon[74331]: 5.d scrub ok
Nov 24 09:27:53 compute-0 ceph-mon[74331]: [24/Nov/2025:09:27:52] ENGINE Bus STARTING
Nov 24 09:27:53 compute-0 ceph-mon[74331]: [24/Nov/2025:09:27:52] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:27:53 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-0 ceph-mon[74331]: 2.0 scrub starts
Nov 24 09:27:53 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-0 ceph-mon[74331]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 09:27:53 compute-0 ceph-mon[74331]: Cluster is now healthy
Nov 24 09:27:53 compute-0 ceph-mon[74331]: 6.f scrub starts
Nov 24 09:27:53 compute-0 ceph-mon[74331]: 6.f scrub ok
Nov 24 09:27:53 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-0 python3[90708]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:27:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:27:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 09:27:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 24 09:27:53 compute-0 podman[90746]: 2025-11-24 09:27:53.403274209 +0000 UTC m=+0.048456643 container create ddfc108d97d5a59e06ffe6fd656eaef6ca33bdac9a030736cb7a20275a8749ad (image=quay.io/ceph/ceph:v19, name=gracious_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 09:27:53 compute-0 systemd[1]: Started libpod-conmon-ddfc108d97d5a59e06ffe6fd656eaef6ca33bdac9a030736cb7a20275a8749ad.scope.
Nov 24 09:27:53 compute-0 podman[90746]: 2025-11-24 09:27:53.383982545 +0000 UTC m=+0.029165019 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a860ba2053e196d5e25a8e084b21256d79068c0b0a1aec695f6c1df4c9a958/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a860ba2053e196d5e25a8e084b21256d79068c0b0a1aec695f6c1df4c9a958/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a860ba2053e196d5e25a8e084b21256d79068c0b0a1aec695f6c1df4c9a958/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:53 compute-0 podman[90746]: 2025-11-24 09:27:53.49571162 +0000 UTC m=+0.140894054 container init ddfc108d97d5a59e06ffe6fd656eaef6ca33bdac9a030736cb7a20275a8749ad (image=quay.io/ceph/ceph:v19, name=gracious_nightingale, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:27:53 compute-0 podman[90746]: 2025-11-24 09:27:53.503148656 +0000 UTC m=+0.148331080 container start ddfc108d97d5a59e06ffe6fd656eaef6ca33bdac9a030736cb7a20275a8749ad (image=quay.io/ceph/ceph:v19, name=gracious_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 24 09:27:53 compute-0 podman[90746]: 2025-11-24 09:27:53.506144117 +0000 UTC m=+0.151326551 container attach ddfc108d97d5a59e06ffe6fd656eaef6ca33bdac9a030736cb7a20275a8749ad (image=quay.io/ceph/ceph:v19, name=gracious_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:53 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.mauvni(active, since 2s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:53 compute-0 sudo[90721]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:27:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:27:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 09:27:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:53 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 24 09:27:53 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 24 09:27:53 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14430 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Nov 24 09:27:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:53 compute-0 gracious_nightingale[90762]: Option ALERTMANAGER_API_HOST updated
Nov 24 09:27:53 compute-0 systemd[1]: libpod-ddfc108d97d5a59e06ffe6fd656eaef6ca33bdac9a030736cb7a20275a8749ad.scope: Deactivated successfully.
Nov 24 09:27:53 compute-0 podman[90746]: 2025-11-24 09:27:53.874577718 +0000 UTC m=+0.519760142 container died ddfc108d97d5a59e06ffe6fd656eaef6ca33bdac9a030736cb7a20275a8749ad (image=quay.io/ceph/ceph:v19, name=gracious_nightingale, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 24 09:27:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7a860ba2053e196d5e25a8e084b21256d79068c0b0a1aec695f6c1df4c9a958-merged.mount: Deactivated successfully.
Nov 24 09:27:53 compute-0 podman[90746]: 2025-11-24 09:27:53.912222486 +0000 UTC m=+0.557404910 container remove ddfc108d97d5a59e06ffe6fd656eaef6ca33bdac9a030736cb7a20275a8749ad (image=quay.io/ceph/ceph:v19, name=gracious_nightingale, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 24 09:27:53 compute-0 systemd[1]: libpod-conmon-ddfc108d97d5a59e06ffe6fd656eaef6ca33bdac9a030736cb7a20275a8749ad.scope: Deactivated successfully.
Nov 24 09:27:53 compute-0 sudo[90694]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-0 sudo[90838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llnrmqszjbpwecgnpjedfflbpqhbdrfi ; /usr/bin/python3'
Nov 24 09:27:54 compute-0 sudo[90838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:54 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:27:54 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:27:54 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 09:27:54 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:54 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:27:54 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:54 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:27:54 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:27:54 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:27:54 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:27:54 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:27:54 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:27:54 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:27:54 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:27:54 compute-0 python3[90840]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:54 compute-0 ceph-mon[74331]: [24/Nov/2025:09:27:52] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:27:54 compute-0 ceph-mon[74331]: [24/Nov/2025:09:27:52] ENGINE Bus STARTED
Nov 24 09:27:54 compute-0 ceph-mon[74331]: [24/Nov/2025:09:27:52] ENGINE Client ('192.168.122.100', 44150) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:27:54 compute-0 ceph-mon[74331]: 2.0 scrub ok
Nov 24 09:27:54 compute-0 ceph-mon[74331]: 5.b scrub starts
Nov 24 09:27:54 compute-0 ceph-mon[74331]: 5.b scrub ok
Nov 24 09:27:54 compute-0 ceph-mon[74331]: pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='client.14418 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 24 09:27:54 compute-0 ceph-mon[74331]: 7.0 scrub starts
Nov 24 09:27:54 compute-0 ceph-mon[74331]: 7.0 scrub ok
Nov 24 09:27:54 compute-0 ceph-mon[74331]: mgrmap e16: compute-0.mauvni(active, since 2s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:54 compute-0 ceph-mon[74331]: 6.9 scrub starts
Nov 24 09:27:54 compute-0 ceph-mon[74331]: 6.9 scrub ok
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='client.14430 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:27:54 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:27:54 compute-0 ceph-mon[74331]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:27:54 compute-0 ceph-mon[74331]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:27:54 compute-0 ceph-mon[74331]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:27:54 compute-0 sudo[90841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:27:54 compute-0 sudo[90841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-0 sudo[90841]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-0 podman[90847]: 2025-11-24 09:27:54.344991286 +0000 UTC m=+0.043286223 container create c0ef910ac64e6709730a9afd75afb7025800c7cc0b5f7c90b73c44ac6b18eaa3 (image=quay.io/ceph/ceph:v19, name=jovial_rhodes, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:54 compute-0 systemd[1]: Started libpod-conmon-c0ef910ac64e6709730a9afd75afb7025800c7cc0b5f7c90b73c44ac6b18eaa3.scope.
Nov 24 09:27:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb3afe560e98ca6e5a426d504e7047b4914e618f3a332e64965956c51a73fdf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb3afe560e98ca6e5a426d504e7047b4914e618f3a332e64965956c51a73fdf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb3afe560e98ca6e5a426d504e7047b4914e618f3a332e64965956c51a73fdf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:54 compute-0 sudo[90879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:27:54 compute-0 sudo[90879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-0 podman[90847]: 2025-11-24 09:27:54.411805102 +0000 UTC m=+0.110100059 container init c0ef910ac64e6709730a9afd75afb7025800c7cc0b5f7c90b73c44ac6b18eaa3 (image=quay.io/ceph/ceph:v19, name=jovial_rhodes, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:54 compute-0 sudo[90879]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-0 podman[90847]: 2025-11-24 09:27:54.417611529 +0000 UTC m=+0.115906466 container start c0ef910ac64e6709730a9afd75afb7025800c7cc0b5f7c90b73c44ac6b18eaa3 (image=quay.io/ceph/ceph:v19, name=jovial_rhodes, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:54 compute-0 podman[90847]: 2025-11-24 09:27:54.325499796 +0000 UTC m=+0.023794753 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:54 compute-0 podman[90847]: 2025-11-24 09:27:54.421343987 +0000 UTC m=+0.119638944 container attach c0ef910ac64e6709730a9afd75afb7025800c7cc0b5f7c90b73c44ac6b18eaa3 (image=quay.io/ceph/ceph:v19, name=jovial_rhodes, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:54 compute-0 sudo[90910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:54 compute-0 sudo[90910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-0 sudo[90910]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-0 sudo[90935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:54 compute-0 sudo[90935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-0 sudo[90935]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-0 sudo[90979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:54 compute-0 sudo[90979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-0 sudo[90979]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Nov 24 09:27:54 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Nov 24 09:27:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:54 compute-0 sudo[91027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:54 compute-0 sudo[91027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-0 sudo[91027]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14436 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:54 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Nov 24 09:27:54 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:54 compute-0 jovial_rhodes[90901]: Option PROMETHEUS_API_HOST updated
Nov 24 09:27:54 compute-0 sudo[91052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:27:54 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:54 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:54 compute-0 sudo[91052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-0 sudo[91052]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-0 systemd[1]: libpod-c0ef910ac64e6709730a9afd75afb7025800c7cc0b5f7c90b73c44ac6b18eaa3.scope: Deactivated successfully.
Nov 24 09:27:54 compute-0 podman[90847]: 2025-11-24 09:27:54.81781055 +0000 UTC m=+0.516105507 container died c0ef910ac64e6709730a9afd75afb7025800c7cc0b5f7c90b73c44ac6b18eaa3 (image=quay.io/ceph/ceph:v19, name=jovial_rhodes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-abb3afe560e98ca6e5a426d504e7047b4914e618f3a332e64965956c51a73fdf-merged.mount: Deactivated successfully.
Nov 24 09:27:54 compute-0 podman[90847]: 2025-11-24 09:27:54.85934262 +0000 UTC m=+0.557637557 container remove c0ef910ac64e6709730a9afd75afb7025800c7cc0b5f7c90b73c44ac6b18eaa3 (image=quay.io/ceph/ceph:v19, name=jovial_rhodes, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:27:54 compute-0 sudo[91080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 24 09:27:54 compute-0 sudo[91080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-0 sudo[90838]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-0 systemd[1]: libpod-conmon-c0ef910ac64e6709730a9afd75afb7025800c7cc0b5f7c90b73c44ac6b18eaa3.scope: Deactivated successfully.
Nov 24 09:27:54 compute-0 sudo[91080]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:54 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:54 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:54 compute-0 sudo[91114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:27:54 compute-0 sudo[91114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:54 compute-0 sudo[91114]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 sudo[91139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:27:55 compute-0 sudo[91139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91139]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:55 compute-0 sudo[91172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:55 compute-0 sudo[91204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nldqczovuanaakwnglkqkhdsdhcbltsd ; /usr/bin/python3'
Nov 24 09:27:55 compute-0 sudo[91172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:55 compute-0 sudo[91172]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 sudo[91215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:55 compute-0 sudo[91215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91215]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 python3[91214]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:55 compute-0 sudo[91240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:55 compute-0 sudo[91240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91240]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 podman[91264]: 2025-11-24 09:27:55.26755165 +0000 UTC m=+0.034363092 container create 5d28992ba6565f493766544f5ecd241dfaa007b28c776e02e35388abb6d1c9a3 (image=quay.io/ceph/ceph:v19, name=wonderful_greider, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 09:27:55 compute-0 systemd[1]: Started libpod-conmon-5d28992ba6565f493766544f5ecd241dfaa007b28c776e02e35388abb6d1c9a3.scope.
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:55 compute-0 sudo[91301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:55 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.mauvni(active, since 4s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:55 compute-0 sudo[91301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb1938a766baae760a15961e624da7cdc9a6999884bab87f998fa09209f356c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb1938a766baae760a15961e624da7cdc9a6999884bab87f998fa09209f356c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb1938a766baae760a15961e624da7cdc9a6999884bab87f998fa09209f356c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:55 compute-0 sudo[91301]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 ceph-mon[74331]: 5.8 scrub starts
Nov 24 09:27:55 compute-0 ceph-mon[74331]: 5.8 scrub ok
Nov 24 09:27:55 compute-0 ceph-mon[74331]: 2.2 scrub starts
Nov 24 09:27:55 compute-0 ceph-mon[74331]: 4.b deep-scrub starts
Nov 24 09:27:55 compute-0 ceph-mon[74331]: 4.b deep-scrub ok
Nov 24 09:27:55 compute-0 ceph-mon[74331]: pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:55 compute-0 ceph-mon[74331]: from='client.14436 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:55 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:55 compute-0 ceph-mon[74331]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:55 compute-0 ceph-mon[74331]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:55 compute-0 ceph-mon[74331]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:55 compute-0 podman[91264]: 2025-11-24 09:27:55.252823992 +0000 UTC m=+0.019635434 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:55 compute-0 podman[91264]: 2025-11-24 09:27:55.351428359 +0000 UTC m=+0.118239821 container init 5d28992ba6565f493766544f5ecd241dfaa007b28c776e02e35388abb6d1c9a3 (image=quay.io/ceph/ceph:v19, name=wonderful_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:27:55 compute-0 podman[91264]: 2025-11-24 09:27:55.35910581 +0000 UTC m=+0.125917252 container start 5d28992ba6565f493766544f5ecd241dfaa007b28c776e02e35388abb6d1c9a3 (image=quay.io/ceph/ceph:v19, name=wonderful_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:27:55 compute-0 podman[91264]: 2025-11-24 09:27:55.361985157 +0000 UTC m=+0.128796609 container attach 5d28992ba6565f493766544f5ecd241dfaa007b28c776e02e35388abb6d1c9a3 (image=quay.io/ceph/ceph:v19, name=wonderful_greider, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 09:27:55 compute-0 sudo[91331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:27:55 compute-0 sudo[91331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91331]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 sudo[91357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:27:55 compute-0 sudo[91357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91357]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:55 compute-0 sudo[91383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:27:55 compute-0 sudo[91383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91383]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 sudo[91426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:27:55 compute-0 sudo[91426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91426]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 sudo[91451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:27:55 compute-0 sudo[91451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91451]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 24 09:27:55 compute-0 sudo[91476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:55 compute-0 sudo[91476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 24 09:27:55 compute-0 sudo[91476]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 sudo[91501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:27:55 compute-0 sudo[91501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91501]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14442 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:55 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:55 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:55 compute-0 wonderful_greider[91324]: Option GRAFANA_API_URL updated
Nov 24 09:27:55 compute-0 systemd[1]: libpod-5d28992ba6565f493766544f5ecd241dfaa007b28c776e02e35388abb6d1c9a3.scope: Deactivated successfully.
Nov 24 09:27:55 compute-0 podman[91264]: 2025-11-24 09:27:55.748356292 +0000 UTC m=+0.515167734 container died 5d28992ba6565f493766544f5ecd241dfaa007b28c776e02e35388abb6d1c9a3 (image=quay.io/ceph/ceph:v19, name=wonderful_greider, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 24 09:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdb1938a766baae760a15961e624da7cdc9a6999884bab87f998fa09209f356c-merged.mount: Deactivated successfully.
Nov 24 09:27:55 compute-0 podman[91264]: 2025-11-24 09:27:55.783697936 +0000 UTC m=+0.550509378 container remove 5d28992ba6565f493766544f5ecd241dfaa007b28c776e02e35388abb6d1c9a3 (image=quay.io/ceph/ceph:v19, name=wonderful_greider, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 24 09:27:55 compute-0 systemd[1]: libpod-conmon-5d28992ba6565f493766544f5ecd241dfaa007b28c776e02e35388abb6d1c9a3.scope: Deactivated successfully.
Nov 24 09:27:55 compute-0 sudo[91204]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 sudo[91558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:27:55 compute-0 sudo[91558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91558]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:55 compute-0 sudo[91587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:27:55 compute-0 sudo[91587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91587]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 sudo[91612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:55 compute-0 sudo[91612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91612]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:55 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:55 compute-0 sudo[91672]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkwibwqrduioipgxxfmgbogrefgjstuw ; /usr/bin/python3'
Nov 24 09:27:55 compute-0 sudo[91672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:55 compute-0 sudo[91650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:27:55 compute-0 sudo[91650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:55 compute-0 sudo[91650]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-0 sudo[91688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:27:56 compute-0 sudo[91688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:56 compute-0 sudo[91688]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-0 sudo[91713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:27:56 compute-0 sudo[91713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:56 compute-0 python3[91685]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:56 compute-0 sudo[91713]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-0 sudo[91739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:56 compute-0 sudo[91739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:56 compute-0 podman[91738]: 2025-11-24 09:27:56.142714766 +0000 UTC m=+0.036437030 container create eae79ca109b7a025688829c057532446a0138139956ff46e48c487ef7f6bb0e7 (image=quay.io/ceph/ceph:v19, name=naughty_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:56 compute-0 sudo[91739]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-0 systemd[1]: Started libpod-conmon-eae79ca109b7a025688829c057532446a0138139956ff46e48c487ef7f6bb0e7.scope.
Nov 24 09:27:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9536da218ecbf2241e8f8acba678d6fb7bcf4c5ddc49beb0d3336a6adcc815b1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9536da218ecbf2241e8f8acba678d6fb7bcf4c5ddc49beb0d3336a6adcc815b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9536da218ecbf2241e8f8acba678d6fb7bcf4c5ddc49beb0d3336a6adcc815b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:56 compute-0 sudo[91777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:27:56 compute-0 sudo[91777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:56 compute-0 podman[91738]: 2025-11-24 09:27:56.202342932 +0000 UTC m=+0.096065206 container init eae79ca109b7a025688829c057532446a0138139956ff46e48c487ef7f6bb0e7 (image=quay.io/ceph/ceph:v19, name=naughty_allen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 09:27:56 compute-0 sudo[91777]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-0 podman[91738]: 2025-11-24 09:27:56.20729142 +0000 UTC m=+0.101013684 container start eae79ca109b7a025688829c057532446a0138139956ff46e48c487ef7f6bb0e7 (image=quay.io/ceph/ceph:v19, name=naughty_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:27:56 compute-0 podman[91738]: 2025-11-24 09:27:56.210077935 +0000 UTC m=+0.103800209 container attach eae79ca109b7a025688829c057532446a0138139956ff46e48c487ef7f6bb0e7 (image=quay.io/ceph/ceph:v19, name=naughty_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 24 09:27:56 compute-0 podman[91738]: 2025-11-24 09:27:56.125275124 +0000 UTC m=+0.018997418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:56 compute-0 sudo[91831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:27:56 compute-0 sudo[91831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:27:56 compute-0 sudo[91831]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:27:56 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:56 compute-0 ceph-mon[74331]: 2.2 scrub ok
Nov 24 09:27:56 compute-0 ceph-mon[74331]: 3.e deep-scrub starts
Nov 24 09:27:56 compute-0 ceph-mon[74331]: 3.e deep-scrub ok
Nov 24 09:27:56 compute-0 ceph-mon[74331]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:56 compute-0 ceph-mon[74331]: mgrmap e17: compute-0.mauvni(active, since 4s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:56 compute-0 ceph-mon[74331]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:56 compute-0 ceph-mon[74331]: 6.b scrub starts
Nov 24 09:27:56 compute-0 ceph-mon[74331]: 6.b scrub ok
Nov 24 09:27:56 compute-0 ceph-mon[74331]: from='client.14442 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:27:56 compute-0 ceph-mon[74331]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:27:56 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:56 compute-0 ceph-mon[74331]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:56 compute-0 ceph-mon[74331]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:56 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:56 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:56 compute-0 sudo[91875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:27:56 compute-0 sudo[91875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:56 compute-0 sudo[91875]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-0 sudo[91900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:56 compute-0 sudo[91900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:56 compute-0 sudo[91900]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:27:56 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:27:56 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:56 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:56 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Nov 24 09:27:56 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1755702997' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 24 09:27:56 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Nov 24 09:27:56 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Nov 24 09:27:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:27:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:27:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:27:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:57 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev c38e43c6-c150-4127-9674-ad7718c77426 (Updating node-exporter deployment (+3 -> 3))
Nov 24 09:27:57 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Nov 24 09:27:57 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Nov 24 09:27:57 compute-0 ceph-mon[74331]: 7.7 scrub starts
Nov 24 09:27:57 compute-0 ceph-mon[74331]: 7.7 scrub ok
Nov 24 09:27:57 compute-0 ceph-mon[74331]: 3.11 scrub starts
Nov 24 09:27:57 compute-0 ceph-mon[74331]: 3.11 scrub ok
Nov 24 09:27:57 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:57 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:57 compute-0 ceph-mon[74331]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:27:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1755702997' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 24 09:27:57 compute-0 ceph-mon[74331]: 6.14 scrub starts
Nov 24 09:27:57 compute-0 ceph-mon[74331]: 6.14 scrub ok
Nov 24 09:27:57 compute-0 ceph-mon[74331]: pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:27:57 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:57 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:57 compute-0 sudo[91926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:27:57 compute-0 sudo[91926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:57 compute-0 sudo[91926]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:57 compute-0 sudo[91951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:27:57 compute-0 sudo[91951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:27:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1755702997' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 24 09:27:57 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.mauvni(active, since 6s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:57 compute-0 systemd[1]: libpod-eae79ca109b7a025688829c057532446a0138139956ff46e48c487ef7f6bb0e7.scope: Deactivated successfully.
Nov 24 09:27:57 compute-0 podman[91738]: 2025-11-24 09:27:57.505287171 +0000 UTC m=+1.399009435 container died eae79ca109b7a025688829c057532446a0138139956ff46e48c487ef7f6bb0e7 (image=quay.io/ceph/ceph:v19, name=naughty_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 24 09:27:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9536da218ecbf2241e8f8acba678d6fb7bcf4c5ddc49beb0d3336a6adcc815b1-merged.mount: Deactivated successfully.
Nov 24 09:27:57 compute-0 podman[91738]: 2025-11-24 09:27:57.542774665 +0000 UTC m=+1.436496939 container remove eae79ca109b7a025688829c057532446a0138139956ff46e48c487ef7f6bb0e7 (image=quay.io/ceph/ceph:v19, name=naughty_allen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:27:57 compute-0 systemd[1]: libpod-conmon-eae79ca109b7a025688829c057532446a0138139956ff46e48c487ef7f6bb0e7.scope: Deactivated successfully.
Nov 24 09:27:57 compute-0 sshd-session[90246]: Connection closed by 192.168.122.100 port 56872
Nov 24 09:27:57 compute-0 sshd-session[90231]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:27:57 compute-0 systemd-logind[822]: Session 34 logged out. Waiting for processes to exit.
Nov 24 09:27:57 compute-0 sudo[91672]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ignoring --setuser ceph since I am not root
Nov 24 09:27:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ignoring --setgroup ceph since I am not root
Nov 24 09:27:57 compute-0 ceph-mgr[74626]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 09:27:57 compute-0 ceph-mgr[74626]: pidfile_write: ignore empty --pid-file
Nov 24 09:27:57 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'alerts'
Nov 24 09:27:57 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Nov 24 09:27:57 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Nov 24 09:27:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:27:57 compute-0 ceph-mgr[74626]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:27:57 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'balancer'
Nov 24 09:27:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:57.750+0000 7fbf6b8bb140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:27:57 compute-0 sudo[92057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymaatairqbbnhsrlioigprjlchklvpak ; /usr/bin/python3'
Nov 24 09:27:57 compute-0 sudo[92057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:27:57 compute-0 systemd[1]: Reloading.
Nov 24 09:27:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:57.838+0000 7fbf6b8bb140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:27:57 compute-0 ceph-mgr[74626]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:27:57 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'cephadm'
Nov 24 09:27:57 compute-0 systemd-sysv-generator[92099]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:27:57 compute-0 systemd-rc-local-generator[92096]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:27:57 compute-0 python3[92066]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:27:57 compute-0 podman[92105]: 2025-11-24 09:27:57.961266298 +0000 UTC m=+0.034948245 container create e99d7d371de4094473b3b1bee4f92eb6aa4cba25904db37206b7cf37e12bb66b (image=quay.io/ceph/ceph:v19, name=competent_torvalds, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:27:58 compute-0 podman[92105]: 2025-11-24 09:27:57.946877318 +0000 UTC m=+0.020559275 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:27:58 compute-0 systemd[1]: Started libpod-conmon-e99d7d371de4094473b3b1bee4f92eb6aa4cba25904db37206b7cf37e12bb66b.scope.
Nov 24 09:27:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cf19fda499aca3b205d89a8a765aed9911624a892eea7f2b36f72024cbc8ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cf19fda499aca3b205d89a8a765aed9911624a892eea7f2b36f72024cbc8ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cf19fda499aca3b205d89a8a765aed9911624a892eea7f2b36f72024cbc8ed/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:58 compute-0 podman[92105]: 2025-11-24 09:27:58.128325349 +0000 UTC m=+0.202007326 container init e99d7d371de4094473b3b1bee4f92eb6aa4cba25904db37206b7cf37e12bb66b (image=quay.io/ceph/ceph:v19, name=competent_torvalds, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:58 compute-0 systemd[1]: Reloading.
Nov 24 09:27:58 compute-0 podman[92105]: 2025-11-24 09:27:58.136227846 +0000 UTC m=+0.209909793 container start e99d7d371de4094473b3b1bee4f92eb6aa4cba25904db37206b7cf37e12bb66b (image=quay.io/ceph/ceph:v19, name=competent_torvalds, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 09:27:58 compute-0 podman[92105]: 2025-11-24 09:27:58.146192951 +0000 UTC m=+0.219874898 container attach e99d7d371de4094473b3b1bee4f92eb6aa4cba25904db37206b7cf37e12bb66b (image=quay.io/ceph/ceph:v19, name=competent_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:27:58 compute-0 systemd-rc-local-generator[92155]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:27:58 compute-0 systemd-sysv-generator[92161]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:27:58 compute-0 ceph-mon[74331]: 7.1 deep-scrub starts
Nov 24 09:27:58 compute-0 ceph-mon[74331]: 7.1 deep-scrub ok
Nov 24 09:27:58 compute-0 ceph-mon[74331]: 5.12 scrub starts
Nov 24 09:27:58 compute-0 ceph-mon[74331]: 5.12 scrub ok
Nov 24 09:27:58 compute-0 ceph-mon[74331]: from='mgr.14364 192.168.122.100:0/3495962044' entity='mgr.compute-0.mauvni' 
Nov 24 09:27:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1755702997' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 24 09:27:58 compute-0 ceph-mon[74331]: mgrmap e18: compute-0.mauvni(active, since 6s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:58 compute-0 ceph-mon[74331]: 6.16 scrub starts
Nov 24 09:27:58 compute-0 ceph-mon[74331]: 6.16 scrub ok
Nov 24 09:27:58 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:27:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Nov 24 09:27:58 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4224251278' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 24 09:27:58 compute-0 bash[92240]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Nov 24 09:27:58 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'crash'
Nov 24 09:27:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:58.668+0000 7fbf6b8bb140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:27:58 compute-0 ceph-mgr[74626]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:27:58 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'dashboard'
Nov 24 09:27:58 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Nov 24 09:27:58 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Nov 24 09:27:58 compute-0 bash[92240]: Getting image source signatures
Nov 24 09:27:58 compute-0 bash[92240]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Nov 24 09:27:58 compute-0 bash[92240]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Nov 24 09:27:58 compute-0 bash[92240]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Nov 24 09:27:59 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'devicehealth'
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:59.302+0000 7fbf6b8bb140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-0 ceph-mgr[74626]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 09:27:59 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4224251278' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 24 09:27:59 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.mauvni(active, since 8s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:27:59 compute-0 ceph-mon[74331]: 7.d scrub starts
Nov 24 09:27:59 compute-0 ceph-mon[74331]: 7.d scrub ok
Nov 24 09:27:59 compute-0 ceph-mon[74331]: 5.13 deep-scrub starts
Nov 24 09:27:59 compute-0 ceph-mon[74331]: 5.13 deep-scrub ok
Nov 24 09:27:59 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4224251278' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 24 09:27:59 compute-0 ceph-mon[74331]: 6.11 scrub starts
Nov 24 09:27:59 compute-0 ceph-mon[74331]: 6.11 scrub ok
Nov 24 09:27:59 compute-0 systemd[1]: libpod-e99d7d371de4094473b3b1bee4f92eb6aa4cba25904db37206b7cf37e12bb66b.scope: Deactivated successfully.
Nov 24 09:27:59 compute-0 podman[92105]: 2025-11-24 09:27:59.431638586 +0000 UTC m=+1.505320533 container died e99d7d371de4094473b3b1bee4f92eb6aa4cba25904db37206b7cf37e12bb66b (image=quay.io/ceph/ceph:v19, name=competent_torvalds, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   from numpy import show_config as show_numpy_config
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:59.476+0000 7fbf6b8bb140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-0 ceph-mgr[74626]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'influx'
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:59.552+0000 7fbf6b8bb140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-0 ceph-mgr[74626]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'insights'
Nov 24 09:27:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2cf19fda499aca3b205d89a8a765aed9911624a892eea7f2b36f72024cbc8ed-merged.mount: Deactivated successfully.
Nov 24 09:27:59 compute-0 bash[92240]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Nov 24 09:27:59 compute-0 bash[92240]: Writing manifest to image destination
Nov 24 09:27:59 compute-0 podman[92105]: 2025-11-24 09:27:59.566931127 +0000 UTC m=+1.640613074 container remove e99d7d371de4094473b3b1bee4f92eb6aa4cba25904db37206b7cf37e12bb66b (image=quay.io/ceph/ceph:v19, name=competent_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 09:27:59 compute-0 systemd[1]: libpod-conmon-e99d7d371de4094473b3b1bee4f92eb6aa4cba25904db37206b7cf37e12bb66b.scope: Deactivated successfully.
Nov 24 09:27:59 compute-0 sudo[92057]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:59 compute-0 podman[92240]: 2025-11-24 09:27:59.596981646 +0000 UTC m=+1.062934467 container create 7b41a24888e2dd3dca187bd76560d76829b7d7b7dcf75bceeedb6a669c1298b7 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:27:59 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'iostat'
Nov 24 09:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d8fbadef1587ff586d2c16083d30393d0c2cfe352d726c2398b19cd8375193/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 24 09:27:59 compute-0 podman[92240]: 2025-11-24 09:27:59.63612696 +0000 UTC m=+1.102079781 container init 7b41a24888e2dd3dca187bd76560d76829b7d7b7dcf75bceeedb6a669c1298b7 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:27:59 compute-0 podman[92240]: 2025-11-24 09:27:59.640770739 +0000 UTC m=+1.106723550 container start 7b41a24888e2dd3dca187bd76560d76829b7d7b7dcf75bceeedb6a669c1298b7 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:27:59 compute-0 bash[92240]: 7b41a24888e2dd3dca187bd76560d76829b7d7b7dcf75bceeedb6a669c1298b7
Nov 24 09:27:59 compute-0 podman[92240]: 2025-11-24 09:27:59.582293639 +0000 UTC m=+1.048246490 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 24 09:27:59 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.650Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.650Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.651Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.651Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.651Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.651Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=arp
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=bcache
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=bonding
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=cpu
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=dmi
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=edac
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=entropy
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=filefd
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=netclass
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=netdev
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=netstat
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=nfs
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=nvme
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=os
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=pressure
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=rapl
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=selinux
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=softnet
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=stat
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=textfile
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=time
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=uname
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=xfs
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.652Z caller=node_exporter.go:117 level=info collector=zfs
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.653Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[92327]: ts=2025-11-24T09:27:59.653Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Nov 24 09:27:59 compute-0 sudo[91951]: pam_unix(sudo:session): session closed for user root
Nov 24 09:27:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:27:59.690+0000 7fbf6b8bb140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 24 09:27:59 compute-0 ceph-mgr[74626]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:27:59 compute-0 systemd[1]: session-34.scope: Consumed 4.924s CPU time.
Nov 24 09:27:59 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'k8sevents'
Nov 24 09:27:59 compute-0 systemd-logind[822]: Removed session 34.
Nov 24 09:27:59 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.10 deep-scrub starts
Nov 24 09:27:59 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.10 deep-scrub ok
Nov 24 09:28:00 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'localpool'
Nov 24 09:28:00 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 09:28:00 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'mirroring'
Nov 24 09:28:00 compute-0 python3[92411]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:28:00 compute-0 ceph-mon[74331]: 7.c scrub starts
Nov 24 09:28:00 compute-0 ceph-mon[74331]: 7.c scrub ok
Nov 24 09:28:00 compute-0 ceph-mon[74331]: 3.15 scrub starts
Nov 24 09:28:00 compute-0 ceph-mon[74331]: 3.15 scrub ok
Nov 24 09:28:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4224251278' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 24 09:28:00 compute-0 ceph-mon[74331]: mgrmap e19: compute-0.mauvni(active, since 8s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:00 compute-0 ceph-mon[74331]: 6.10 deep-scrub starts
Nov 24 09:28:00 compute-0 ceph-mon[74331]: 6.10 deep-scrub ok
Nov 24 09:28:00 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'nfs'
Nov 24 09:28:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:00.687+0000 7fbf6b8bb140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-0 ceph-mgr[74626]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'orchestrator'
Nov 24 09:28:00 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Nov 24 09:28:00 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Nov 24 09:28:00 compute-0 python3[92482]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763976480.1516187-37304-137216876853562/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:28:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:00.910+0000 7fbf6b8bb140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-0 ceph-mgr[74626]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 09:28:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:00.989+0000 7fbf6b8bb140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-0 ceph-mgr[74626]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:28:00 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'osd_support'
Nov 24 09:28:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:01.054+0000 7fbf6b8bb140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-0 ceph-mgr[74626]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 09:28:01 compute-0 sudo[92530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oizvgxcsnzecjzchwyanbihxqvyuzzeb ; /usr/bin/python3'
Nov 24 09:28:01 compute-0 sudo[92530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:01.135+0000 7fbf6b8bb140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-0 ceph-mgr[74626]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'progress'
Nov 24 09:28:01 compute-0 python3[92532]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:01.210+0000 7fbf6b8bb140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-0 ceph-mgr[74626]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'prometheus'
Nov 24 09:28:01 compute-0 podman[92533]: 2025-11-24 09:28:01.249677835 +0000 UTC m=+0.039173585 container create 5942f3a3f85a1276d9bf670894bfe4067b8d74af9544eb80ac3fcb14b446a269 (image=quay.io/ceph/ceph:v19, name=gracious_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:28:01 compute-0 systemd[1]: Started libpod-conmon-5942f3a3f85a1276d9bf670894bfe4067b8d74af9544eb80ac3fcb14b446a269.scope.
Nov 24 09:28:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb23ddbe2724a415947080575f8c0509237ffdd3a38003125911e7ec935afdc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb23ddbe2724a415947080575f8c0509237ffdd3a38003125911e7ec935afdc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb23ddbe2724a415947080575f8c0509237ffdd3a38003125911e7ec935afdc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:01 compute-0 podman[92533]: 2025-11-24 09:28:01.327138663 +0000 UTC m=+0.116634443 container init 5942f3a3f85a1276d9bf670894bfe4067b8d74af9544eb80ac3fcb14b446a269 (image=quay.io/ceph/ceph:v19, name=gracious_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:28:01 compute-0 podman[92533]: 2025-11-24 09:28:01.23249062 +0000 UTC m=+0.021986390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:01 compute-0 podman[92533]: 2025-11-24 09:28:01.334358853 +0000 UTC m=+0.123854603 container start 5942f3a3f85a1276d9bf670894bfe4067b8d74af9544eb80ac3fcb14b446a269 (image=quay.io/ceph/ceph:v19, name=gracious_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:28:01 compute-0 podman[92533]: 2025-11-24 09:28:01.337362404 +0000 UTC m=+0.126858184 container attach 5942f3a3f85a1276d9bf670894bfe4067b8d74af9544eb80ac3fcb14b446a269 (image=quay.io/ceph/ceph:v19, name=gracious_banzai, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:28:01 compute-0 ceph-mon[74331]: 2.13 scrub starts
Nov 24 09:28:01 compute-0 ceph-mon[74331]: 2.13 scrub ok
Nov 24 09:28:01 compute-0 ceph-mon[74331]: 7.19 scrub starts
Nov 24 09:28:01 compute-0 ceph-mon[74331]: 7.19 scrub ok
Nov 24 09:28:01 compute-0 ceph-mon[74331]: 7.11 scrub starts
Nov 24 09:28:01 compute-0 ceph-mon[74331]: 7.11 scrub ok
Nov 24 09:28:01 compute-0 ceph-mon[74331]: 6.13 scrub starts
Nov 24 09:28:01 compute-0 ceph-mon[74331]: 6.13 scrub ok
Nov 24 09:28:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:01.562+0000 7fbf6b8bb140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-0 ceph-mgr[74626]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rbd_support'
Nov 24 09:28:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:01.662+0000 7fbf6b8bb140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-0 ceph-mgr[74626]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:28:01 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'restful'
Nov 24 09:28:01 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Nov 24 09:28:01 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Nov 24 09:28:01 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rgw'
Nov 24 09:28:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:02.119+0000 7fbf6b8bb140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-0 ceph-mgr[74626]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rook'
Nov 24 09:28:02 compute-0 ceph-mon[74331]: 7.1a scrub starts
Nov 24 09:28:02 compute-0 ceph-mon[74331]: 7.1a scrub ok
Nov 24 09:28:02 compute-0 ceph-mon[74331]: 7.16 deep-scrub starts
Nov 24 09:28:02 compute-0 ceph-mon[74331]: 7.16 deep-scrub ok
Nov 24 09:28:02 compute-0 ceph-mon[74331]: 6.1d scrub starts
Nov 24 09:28:02 compute-0 ceph-mon[74331]: 6.1d scrub ok
Nov 24 09:28:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:02.701+0000 7fbf6b8bb140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-0 ceph-mgr[74626]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'selftest'
Nov 24 09:28:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:02 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 24 09:28:02 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 24 09:28:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:02.774+0000 7fbf6b8bb140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-0 ceph-mgr[74626]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'snap_schedule'
Nov 24 09:28:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:02.855+0000 7fbf6b8bb140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-0 ceph-mgr[74626]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:28:02 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'stats'
Nov 24 09:28:02 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'status'
Nov 24 09:28:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:03.018+0000 7fbf6b8bb140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'telegraf'
Nov 24 09:28:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:03.109+0000 7fbf6b8bb140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'telemetry'
Nov 24 09:28:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:03.286+0000 7fbf6b8bb140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 09:28:03 compute-0 ceph-mon[74331]: 5.1f scrub starts
Nov 24 09:28:03 compute-0 ceph-mon[74331]: 5.1f scrub ok
Nov 24 09:28:03 compute-0 ceph-mon[74331]: 2.10 scrub starts
Nov 24 09:28:03 compute-0 ceph-mon[74331]: 2.10 scrub ok
Nov 24 09:28:03 compute-0 ceph-mon[74331]: 7.1b scrub starts
Nov 24 09:28:03 compute-0 ceph-mon[74331]: 7.1b scrub ok
Nov 24 09:28:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:03.517+0000 7fbf6b8bb140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'volumes'
Nov 24 09:28:03 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.rzcnzg restarted
Nov 24 09:28:03 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.rzcnzg started
Nov 24 09:28:03 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.qelqsg restarted
Nov 24 09:28:03 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.qelqsg started
Nov 24 09:28:03 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.1f deep-scrub starts
Nov 24 09:28:03 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.1f deep-scrub ok
Nov 24 09:28:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:03.786+0000 7fbf6b8bb140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'zabbix'
Nov 24 09:28:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:03.857+0000 7fbf6b8bb140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:28:03 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mauvni restarted
Nov 24 09:28:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 24 09:28:03 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mauvni
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: ms_deliver_dispatch: unhandled message 0x561563e41860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  1: '-n'
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  2: 'mgr.compute-0.mauvni'
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  3: '-f'
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  4: '--setuser'
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  5: 'ceph'
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  6: '--setgroup'
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  7: 'ceph'
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  8: '--default-log-to-file=false'
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  9: '--default-log-to-journald=true'
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: mgr respawn  exe_path /proc/self/exe
Nov 24 09:28:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 24 09:28:03 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 24 09:28:03 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.mauvni(active, starting, since 0.0304016s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ignoring --setuser ceph since I am not root
Nov 24 09:28:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ignoring --setgroup ceph since I am not root
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 09:28:03 compute-0 ceph-mgr[74626]: pidfile_write: ignore empty --pid-file
Nov 24 09:28:04 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'alerts'
Nov 24 09:28:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:04.107+0000 7f5a31d38140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:28:04 compute-0 ceph-mgr[74626]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:28:04 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'balancer'
Nov 24 09:28:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:04.188+0000 7f5a31d38140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:28:04 compute-0 ceph-mgr[74626]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:28:04 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'cephadm'
Nov 24 09:28:04 compute-0 ceph-mon[74331]: 3.14 scrub starts
Nov 24 09:28:04 compute-0 ceph-mon[74331]: 3.14 scrub ok
Nov 24 09:28:04 compute-0 ceph-mon[74331]: 7.14 scrub starts
Nov 24 09:28:04 compute-0 ceph-mon[74331]: 7.14 scrub ok
Nov 24 09:28:04 compute-0 ceph-mon[74331]: Standby manager daemon compute-2.rzcnzg restarted
Nov 24 09:28:04 compute-0 ceph-mon[74331]: Standby manager daemon compute-2.rzcnzg started
Nov 24 09:28:04 compute-0 ceph-mon[74331]: Standby manager daemon compute-1.qelqsg restarted
Nov 24 09:28:04 compute-0 ceph-mon[74331]: Standby manager daemon compute-1.qelqsg started
Nov 24 09:28:04 compute-0 ceph-mon[74331]: 3.1f deep-scrub starts
Nov 24 09:28:04 compute-0 ceph-mon[74331]: 3.1f deep-scrub ok
Nov 24 09:28:04 compute-0 ceph-mon[74331]: Active manager daemon compute-0.mauvni restarted
Nov 24 09:28:04 compute-0 ceph-mon[74331]: Activating manager daemon compute-0.mauvni
Nov 24 09:28:04 compute-0 ceph-mon[74331]: osdmap e46: 3 total, 3 up, 3 in
Nov 24 09:28:04 compute-0 ceph-mon[74331]: mgrmap e20: compute-0.mauvni(active, starting, since 0.0304016s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:04 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 24 09:28:04 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 24 09:28:04 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'crash'
Nov 24 09:28:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:05.040+0000 7f5a31d38140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-0 ceph-mgr[74626]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'dashboard'
Nov 24 09:28:05 compute-0 ceph-mon[74331]: 5.10 deep-scrub starts
Nov 24 09:28:05 compute-0 ceph-mon[74331]: 5.10 deep-scrub ok
Nov 24 09:28:05 compute-0 ceph-mon[74331]: 2.c scrub starts
Nov 24 09:28:05 compute-0 ceph-mon[74331]: 2.c scrub ok
Nov 24 09:28:05 compute-0 ceph-mon[74331]: 3.1e scrub starts
Nov 24 09:28:05 compute-0 ceph-mon[74331]: 3.1e scrub ok
Nov 24 09:28:05 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'devicehealth'
Nov 24 09:28:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:05.735+0000 7f5a31d38140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-0 ceph-mgr[74626]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 09:28:05 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 24 09:28:05 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 24 09:28:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 09:28:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 09:28:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   from numpy import show_config as show_numpy_config
Nov 24 09:28:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:05.918+0000 7f5a31d38140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-0 ceph-mgr[74626]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'influx'
Nov 24 09:28:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:05.998+0000 7f5a31d38140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-0 ceph-mgr[74626]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:28:05 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'insights'
Nov 24 09:28:06 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'iostat'
Nov 24 09:28:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:06.133+0000 7f5a31d38140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:28:06 compute-0 ceph-mgr[74626]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:28:06 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'k8sevents'
Nov 24 09:28:06 compute-0 ceph-mon[74331]: 4.13 scrub starts
Nov 24 09:28:06 compute-0 ceph-mon[74331]: 4.13 scrub ok
Nov 24 09:28:06 compute-0 ceph-mon[74331]: 2.f deep-scrub starts
Nov 24 09:28:06 compute-0 ceph-mon[74331]: 2.f deep-scrub ok
Nov 24 09:28:06 compute-0 ceph-mon[74331]: 2.19 scrub starts
Nov 24 09:28:06 compute-0 ceph-mon[74331]: 2.19 scrub ok
Nov 24 09:28:06 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'localpool'
Nov 24 09:28:06 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 09:28:06 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 24 09:28:06 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 24 09:28:06 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'mirroring'
Nov 24 09:28:06 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'nfs'
Nov 24 09:28:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:07.127+0000 7f5a31d38140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'orchestrator'
Nov 24 09:28:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:07.338+0000 7f5a31d38140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 09:28:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:07.409+0000 7f5a31d38140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'osd_support'
Nov 24 09:28:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:07.475+0000 7f5a31d38140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 09:28:07 compute-0 ceph-mon[74331]: 5.15 scrub starts
Nov 24 09:28:07 compute-0 ceph-mon[74331]: 5.15 scrub ok
Nov 24 09:28:07 compute-0 ceph-mon[74331]: 2.15 scrub starts
Nov 24 09:28:07 compute-0 ceph-mon[74331]: 2.15 scrub ok
Nov 24 09:28:07 compute-0 ceph-mon[74331]: 5.1d scrub starts
Nov 24 09:28:07 compute-0 ceph-mon[74331]: 5.1d scrub ok
Nov 24 09:28:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:07.559+0000 7f5a31d38140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'progress'
Nov 24 09:28:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:07.631+0000 7f5a31d38140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'prometheus'
Nov 24 09:28:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:07 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.1e deep-scrub starts
Nov 24 09:28:07 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.1e deep-scrub ok
Nov 24 09:28:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:07.990+0000 7f5a31d38140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:28:07 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rbd_support'
Nov 24 09:28:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:08.100+0000 7f5a31d38140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:28:08 compute-0 ceph-mgr[74626]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:28:08 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'restful'
Nov 24 09:28:08 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rgw'
Nov 24 09:28:08 compute-0 ceph-mon[74331]: 5.11 scrub starts
Nov 24 09:28:08 compute-0 ceph-mon[74331]: 5.11 scrub ok
Nov 24 09:28:08 compute-0 ceph-mon[74331]: 7.a scrub starts
Nov 24 09:28:08 compute-0 ceph-mon[74331]: 7.a scrub ok
Nov 24 09:28:08 compute-0 ceph-mon[74331]: 7.1e deep-scrub starts
Nov 24 09:28:08 compute-0 ceph-mon[74331]: 7.1e deep-scrub ok
Nov 24 09:28:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:08.569+0000 7f5a31d38140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:28:08 compute-0 ceph-mgr[74626]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:28:08 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rook'
Nov 24 09:28:08 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.5 deep-scrub starts
Nov 24 09:28:08 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.5 deep-scrub ok
Nov 24 09:28:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:09.209+0000 7f5a31d38140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'selftest'
Nov 24 09:28:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:09.287+0000 7f5a31d38140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'snap_schedule'
Nov 24 09:28:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:09.375+0000 7f5a31d38140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'stats'
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'status'
Nov 24 09:28:09 compute-0 ceph-mon[74331]: 3.10 scrub starts
Nov 24 09:28:09 compute-0 ceph-mon[74331]: 3.10 scrub ok
Nov 24 09:28:09 compute-0 ceph-mon[74331]: 2.1b scrub starts
Nov 24 09:28:09 compute-0 ceph-mon[74331]: 2.1b scrub ok
Nov 24 09:28:09 compute-0 ceph-mon[74331]: 5.5 deep-scrub starts
Nov 24 09:28:09 compute-0 ceph-mon[74331]: 5.5 deep-scrub ok
Nov 24 09:28:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:09.546+0000 7f5a31d38140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'telegraf'
Nov 24 09:28:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:09.617+0000 7f5a31d38140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'telemetry'
Nov 24 09:28:09 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 24 09:28:09 compute-0 systemd[75656]: Activating special unit Exit the Session...
Nov 24 09:28:09 compute-0 systemd[75656]: Stopped target Main User Target.
Nov 24 09:28:09 compute-0 systemd[75656]: Stopped target Basic System.
Nov 24 09:28:09 compute-0 systemd[75656]: Stopped target Paths.
Nov 24 09:28:09 compute-0 systemd[75656]: Stopped target Sockets.
Nov 24 09:28:09 compute-0 systemd[75656]: Stopped target Timers.
Nov 24 09:28:09 compute-0 systemd[75656]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 24 09:28:09 compute-0 systemd[75656]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 09:28:09 compute-0 systemd[75656]: Closed D-Bus User Message Bus Socket.
Nov 24 09:28:09 compute-0 systemd[75656]: Stopped Create User's Volatile Files and Directories.
Nov 24 09:28:09 compute-0 systemd[75656]: Removed slice User Application Slice.
Nov 24 09:28:09 compute-0 systemd[75656]: Reached target Shutdown.
Nov 24 09:28:09 compute-0 systemd[75656]: Finished Exit the Session.
Nov 24 09:28:09 compute-0 systemd[75656]: Reached target Exit the Session.
Nov 24 09:28:09 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 24 09:28:09 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 24 09:28:09 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 24 09:28:09 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 24 09:28:09 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 24 09:28:09 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 24 09:28:09 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 24 09:28:09 compute-0 systemd[1]: user-42477.slice: Consumed 31.900s CPU time.
Nov 24 09:28:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:09.772+0000 7f5a31d38140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:28:09 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 09:28:09 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 24 09:28:09 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 24 09:28:09 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.rzcnzg restarted
Nov 24 09:28:09 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.rzcnzg started
Nov 24 09:28:09 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.qelqsg restarted
Nov 24 09:28:09 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.qelqsg started
Nov 24 09:28:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:10.014+0000 7f5a31d38140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'volumes'
Nov 24 09:28:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:10.292+0000 7f5a31d38140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'zabbix'
Nov 24 09:28:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:28:10.368+0000 7f5a31d38140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mauvni restarted
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mauvni
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: ms_deliver_dispatch: unhandled message 0x561d646b7860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr handle_mgr_map Activating!
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr handle_mgr_map I am now activating
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.mauvni(active, starting, since 0.0317391s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e1 all = 1
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: balancer
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [balancer INFO root] Starting
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Manager daemon compute-0.mauvni is now available
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:28:10
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: cephadm
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: crash
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: dashboard
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO access_control] Loading user roles DB version=2
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO sso] Loading SSO DB version=1
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: devicehealth
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Starting
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: iostat
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: nfs
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: orchestrator
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: pg_autoscaler
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: progress
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [progress INFO root] Loading...
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f59ae842430>, <progress.module.GhostEvent object at 0x7f59ae842670>, <progress.module.GhostEvent object at 0x7f59ae8426a0>, <progress.module.GhostEvent object at 0x7f59ae8426d0>, <progress.module.GhostEvent object at 0x7f59ae842700>, <progress.module.GhostEvent object at 0x7f59ae842730>, <progress.module.GhostEvent object at 0x7f59ae842760>, <progress.module.GhostEvent object at 0x7f59ae842790>, <progress.module.GhostEvent object at 0x7f59ae8427c0>, <progress.module.GhostEvent object at 0x7f59ae8427f0>, <progress.module.GhostEvent object at 0x7f59ae842820>] historic events
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [progress INFO root] Loaded OSDMap, ready.
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] recovery thread starting
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] starting setup
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: rbd_support
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: restful
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: status
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [restful INFO root] server_addr: :: server_port: 8003
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: telemetry
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [restful WARNING root] server not running: no certificate configured
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] PerfHandler: starting
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 24 09:28:10 compute-0 ceph-mon[74331]: 6.15 scrub starts
Nov 24 09:28:10 compute-0 ceph-mon[74331]: 6.15 scrub ok
Nov 24 09:28:10 compute-0 ceph-mon[74331]: 7.1f scrub starts
Nov 24 09:28:10 compute-0 ceph-mon[74331]: 7.1f scrub ok
Nov 24 09:28:10 compute-0 ceph-mon[74331]: 7.13 scrub starts
Nov 24 09:28:10 compute-0 ceph-mon[74331]: 7.13 scrub ok
Nov 24 09:28:10 compute-0 ceph-mon[74331]: Standby manager daemon compute-2.rzcnzg restarted
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 24 09:28:10 compute-0 ceph-mon[74331]: Standby manager daemon compute-2.rzcnzg started
Nov 24 09:28:10 compute-0 ceph-mon[74331]: Standby manager daemon compute-1.qelqsg restarted
Nov 24 09:28:10 compute-0 ceph-mon[74331]: Standby manager daemon compute-1.qelqsg started
Nov 24 09:28:10 compute-0 ceph-mon[74331]: Active manager daemon compute-0.mauvni restarted
Nov 24 09:28:10 compute-0 ceph-mon[74331]: Activating manager daemon compute-0.mauvni
Nov 24 09:28:10 compute-0 ceph-mon[74331]: osdmap e47: 3 total, 3 up, 3 in
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mgrmap e21: compute-0.mauvni(active, starting, since 0.0317391s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mon[74331]: Manager daemon compute-0.mauvni is now available
Nov 24 09:28:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: volumes
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TaskHandler: starting
Nov 24 09:28:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"} v 0)
Nov 24 09:28:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] setup complete
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
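The block above is the dashboard module registering its controller classes against REST routes. A minimal sketch of exercising one of those routes from a client, assuming the dashboard listens on compute-0:8443 and that an admin account exists (neither the URL nor any credentials appear in this log):

    # Hypothetical endpoint and credentials; adjust to the actual dashboard config.
    DASH=https://compute-0:8443
    TOKEN=$(curl -sk -X POST "$DASH/api/auth" \
        -H 'Accept: application/vnd.ceph.api.v1.0+json' \
        -H 'Content-Type: application/json' \
        -d '{"username": "admin", "password": "CHANGE_ME"}' | jq -r .token)
    # Hit the Health controller registered at /api/health above.
    curl -sk -H "Authorization: Bearer $TOKEN" \
        -H 'Accept: application/vnd.ceph.api.v1.0+json' \
        "$DASH/api/health/minimal"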
Nov 24 09:28:10 compute-0 sshd-session[92722]: Accepted publickey for ceph-admin from 192.168.122.100 port 58974 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:28:10 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 24 09:28:10 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 24 09:28:10 compute-0 systemd-logind[822]: New session 35 of user ceph-admin.
Nov 24 09:28:10 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 09:28:10 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 09:28:10 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 09:28:10 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 24 09:28:10 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.module] Engine started.
Nov 24 09:28:10 compute-0 systemd[92737]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:28:10 compute-0 systemd[92737]: Queued start job for default target Main User Target.
Nov 24 09:28:11 compute-0 systemd[92737]: Created slice User Application Slice.
Nov 24 09:28:11 compute-0 systemd[92737]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 09:28:11 compute-0 systemd[92737]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 09:28:11 compute-0 systemd[92737]: Reached target Paths.
Nov 24 09:28:11 compute-0 systemd[92737]: Reached target Timers.
Nov 24 09:28:11 compute-0 systemd[92737]: Starting D-Bus User Message Bus Socket...
Nov 24 09:28:11 compute-0 systemd[92737]: Starting Create User's Volatile Files and Directories...
Nov 24 09:28:11 compute-0 systemd[92737]: Finished Create User's Volatile Files and Directories.
Nov 24 09:28:11 compute-0 systemd[92737]: Listening on D-Bus User Message Bus Socket.
Nov 24 09:28:11 compute-0 systemd[92737]: Reached target Sockets.
Nov 24 09:28:11 compute-0 systemd[92737]: Reached target Basic System.
Nov 24 09:28:11 compute-0 systemd[92737]: Reached target Main User Target.
Nov 24 09:28:11 compute-0 systemd[92737]: Startup finished in 125ms.
Nov 24 09:28:11 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 24 09:28:11 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Nov 24 09:28:11 compute-0 sshd-session[92722]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:28:11 compute-0 sudo[92754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:11 compute-0 sudo[92754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:11 compute-0 sudo[92754]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:11 compute-0 sudo[92779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:28:11 compute-0 sudo[92779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
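The command above is the orchestrator taking its periodic daemon inventory: cephadm's `ls` subcommand prints a JSON array describing every daemon deployed on the host. A sketch of running it by hand, assuming jq is available (an assumption; field names should be checked against the cephadm version in use):

    # List daemon names and states from this host's cephadm inventory.
    sudo cephadm ls | jq -r '.[] | "\(.name)\t\(.state)"'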
Nov 24 09:28:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.mauvni(active, since 1.05786s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Nov 24 09:28:11 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 24 09:28:11 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14475 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:28:11 compute-0 ceph-mgr[74626]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 24 09:28:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Nov 24 09:28:11 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 24 09:28:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Nov 24 09:28:11 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 24 09:28:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 24 09:28:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 09:28:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 24 09:28:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0[74327]: 2025-11-24T09:28:11.440+0000 7f6964768640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 09:28:11 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
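MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX fire here because `fs new` registers the filesystem before any MDS daemon exists; the checks should clear once the mds.cephfs service applied below deploys its daemons. Two standard commands for watching that from an admin shell:

    # Show the failing checks with per-check detail.
    ceph health detail
    # Watch the filesystem until an MDS claims rank 0.
    ceph fs status cephfs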
Nov 24 09:28:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e2 new map
Nov 24 09:28:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-11-24T09:28:11.441297+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:11.441245+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Nov 24 09:28:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 24 09:28:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 24 09:28:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsmap cephfs:0
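The print_map block above is the mon's dump of fsmap epoch 2. The same data can be pulled on demand rather than scraped from the mon log; for example:

    # Dump the live fsmap (same fields as the print_map block above).
    ceph fs dump
    # Or query individual settings shown above, e.g. max_mds and session_timeout:
    ceph fs get cephfs | grep -E 'max_mds|session_timeout'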
Nov 24 09:28:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 24 09:28:11 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:11 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:11 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:11 compute-0 ceph-mgr[74626]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 24 09:28:11 compute-0 systemd[1]: libpod-5942f3a3f85a1276d9bf670894bfe4067b8d74af9544eb80ac3fcb14b446a269.scope: Deactivated successfully.
Nov 24 09:28:11 compute-0 ceph-mon[74331]: 5.9 scrub starts
Nov 24 09:28:11 compute-0 ceph-mon[74331]: 5.9 scrub ok
Nov 24 09:28:11 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:28:11 compute-0 ceph-mon[74331]: 2.d scrub starts
Nov 24 09:28:11 compute-0 ceph-mon[74331]: 2.d scrub ok
Nov 24 09:28:11 compute-0 ceph-mon[74331]: 7.6 scrub starts
Nov 24 09:28:11 compute-0 ceph-mon[74331]: 7.6 scrub ok
Nov 24 09:28:11 compute-0 ceph-mon[74331]: mgrmap e22: compute-0.mauvni(active, since 1.05786s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:11 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 24 09:28:11 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 24 09:28:11 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 24 09:28:11 compute-0 ceph-mon[74331]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 09:28:11 compute-0 ceph-mon[74331]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 24 09:28:11 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 24 09:28:11 compute-0 ceph-mon[74331]: osdmap e48: 3 total, 3 up, 3 in
Nov 24 09:28:11 compute-0 ceph-mon[74331]: fsmap cephfs:0
Nov 24 09:28:11 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:11 compute-0 podman[92817]: 2025-11-24 09:28:11.560027558 +0000 UTC m=+0.033026130 container died 5942f3a3f85a1276d9bf670894bfe4067b8d74af9544eb80ac3fcb14b446a269 (image=quay.io/ceph/ceph:v19, name=gracious_banzai, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 09:28:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fb23ddbe2724a415947080575f8c0509237ffdd3a38003125911e7ec935afdc-merged.mount: Deactivated successfully.
Nov 24 09:28:11 compute-0 podman[92817]: 2025-11-24 09:28:11.603413701 +0000 UTC m=+0.076412273 container remove 5942f3a3f85a1276d9bf670894bfe4067b8d74af9544eb80ac3fcb14b446a269 (image=quay.io/ceph/ceph:v19, name=gracious_banzai, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:28:11 compute-0 systemd[1]: libpod-conmon-5942f3a3f85a1276d9bf670894bfe4067b8d74af9544eb80ac3fcb14b446a269.scope: Deactivated successfully.
Nov 24 09:28:11 compute-0 sudo[92530]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:11 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 24 09:28:11 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 24 09:28:11 compute-0 sudo[92907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urdpnrcnvvmbvbuvxuhtmrittnaoukuy ; /usr/bin/python3'
Nov 24 09:28:11 compute-0 sudo[92907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:11 compute-0 podman[92908]: 2025-11-24 09:28:11.916633491 +0000 UTC m=+0.070154636 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:28:11 compute-0 python3[92916]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
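This task bind-mounts /tmp/ceph_mds.yml into the container as /home/ceph_spec.yaml and feeds it to `ceph orch apply --in-file`. The spec file itself is not captured in this log; a plausible minimal reconstruction, using only the service name (mds.cephfs) and placement (compute-0;compute-1;compute-2) that cephadm reports saving:

    # Hypothetical contents of /tmp/ceph_mds.yml; only the service type/id and
    # the three placement hosts are confirmed by the surrounding log lines.
    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    EOF
    ceph orch apply --in-file /tmp/ceph_mds.yml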
Nov 24 09:28:12 compute-0 podman[92908]: 2025-11-24 09:28:12.01240766 +0000 UTC m=+0.165928805 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:28:12 compute-0 podman[92931]: 2025-11-24 09:28:12.050115079 +0000 UTC m=+0.046035037 container create f3aafafb93a7dcf68cdeddfa4f6191bb6942a230bfc6f185b047307701994d34 (image=quay.io/ceph/ceph:v19, name=epic_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 24 09:28:12 compute-0 systemd[1]: Started libpod-conmon-f3aafafb93a7dcf68cdeddfa4f6191bb6942a230bfc6f185b047307701994d34.scope.
Nov 24 09:28:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd10db88c5dc8d52b76a8e89e8fe3e95cfeab4d08666ceccf3dce5bf086c22e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd10db88c5dc8d52b76a8e89e8fe3e95cfeab4d08666ceccf3dce5bf086c22e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd10db88c5dc8d52b76a8e89e8fe3e95cfeab4d08666ceccf3dce5bf086c22e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:12 compute-0 podman[92931]: 2025-11-24 09:28:12.032245298 +0000 UTC m=+0.028165266 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:12 compute-0 podman[92931]: 2025-11-24 09:28:12.134468519 +0000 UTC m=+0.130388497 container init f3aafafb93a7dcf68cdeddfa4f6191bb6942a230bfc6f185b047307701994d34 (image=quay.io/ceph/ceph:v19, name=epic_rhodes, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:28:12 compute-0 podman[92931]: 2025-11-24 09:28:12.141789213 +0000 UTC m=+0.137709171 container start f3aafafb93a7dcf68cdeddfa4f6191bb6942a230bfc6f185b047307701994d34 (image=quay.io/ceph/ceph:v19, name=epic_rhodes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 24 09:28:12 compute-0 podman[92931]: 2025-11-24 09:28:12.145347916 +0000 UTC m=+0.141267874 container attach f3aafafb93a7dcf68cdeddfa4f6191bb6942a230bfc6f185b047307701994d34 (image=quay.io/ceph/ceph:v19, name=epic_rhodes, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:28:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:28:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:28:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:28:12] ENGINE Bus STARTING
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:28:12] ENGINE Bus STARTING
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14508 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 24 09:28:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-0 epic_rhodes[92958]: Scheduled mds.cephfs update...
Nov 24 09:28:12 compute-0 systemd[1]: libpod-f3aafafb93a7dcf68cdeddfa4f6191bb6942a230bfc6f185b047307701994d34.scope: Deactivated successfully.
Nov 24 09:28:12 compute-0 podman[92931]: 2025-11-24 09:28:12.541191575 +0000 UTC m=+0.537111533 container died f3aafafb93a7dcf68cdeddfa4f6191bb6942a230bfc6f185b047307701994d34 (image=quay.io/ceph/ceph:v19, name=epic_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Check health
Nov 24 09:28:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdd10db88c5dc8d52b76a8e89e8fe3e95cfeab4d08666ceccf3dce5bf086c22e-merged.mount: Deactivated successfully.
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:28:12] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:28:12] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:28:12 compute-0 ceph-mon[74331]: 3.16 scrub starts
Nov 24 09:28:12 compute-0 ceph-mon[74331]: 3.16 scrub ok
Nov 24 09:28:12 compute-0 ceph-mon[74331]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:12 compute-0 ceph-mon[74331]: 2.12 deep-scrub starts
Nov 24 09:28:12 compute-0 ceph-mon[74331]: 2.12 deep-scrub ok
Nov 24 09:28:12 compute-0 ceph-mon[74331]: 3.4 scrub starts
Nov 24 09:28:12 compute-0 ceph-mon[74331]: 3.4 scrub ok
Nov 24 09:28:12 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:28:12] ENGINE Client ('192.168.122.100', 34270) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:28:12] ENGINE Client ('192.168.122.100', 34270) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:28:12 compute-0 podman[92931]: 2025-11-24 09:28:12.585695025 +0000 UTC m=+0.581614983 container remove f3aafafb93a7dcf68cdeddfa4f6191bb6942a230bfc6f185b047307701994d34 (image=quay.io/ceph/ceph:v19, name=epic_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:28:12 compute-0 systemd[1]: libpod-conmon-f3aafafb93a7dcf68cdeddfa4f6191bb6942a230bfc6f185b047307701994d34.scope: Deactivated successfully.
Nov 24 09:28:12 compute-0 sudo[92907]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:12 compute-0 podman[93118]: 2025-11-24 09:28:12.66261522 +0000 UTC m=+0.061505263 container exec 7b41a24888e2dd3dca187bd76560d76829b7d7b7dcf75bceeedb6a669c1298b7 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:28:12 compute-0 podman[93118]: 2025-11-24 09:28:12.671705293 +0000 UTC m=+0.070595296 container exec_died 7b41a24888e2dd3dca187bd76560d76829b7d7b7dcf75bceeedb6a669c1298b7 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:28:12] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:28:12] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:28:12] ENGINE Bus STARTED
Nov 24 09:28:12 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:28:12] ENGINE Bus STARTED
Nov 24 09:28:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:12 compute-0 sudo[92779]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:28:12 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 24 09:28:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:28:12 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 24 09:28:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:28:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:28:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-0 sudo[93184]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abskshvsnoywweiopldxdozbzxvweval ; /usr/bin/python3'
Nov 24 09:28:12 compute-0 sudo[93184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:12 compute-0 sudo[93186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:12 compute-0 sudo[93186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:12 compute-0 sudo[93186]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:13 compute-0 sudo[93212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:28:13 compute-0 sudo[93212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:13 compute-0 python3[93188]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
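Stripped of the podman wrapper, the task above reduces to the following direct CLI call (arguments copied verbatim from the log line, including the trailing space in the placement string):

    # Create an NFS-Ganesha cluster fronted by an ingress service (haproxy +
    # keepalived) on the given virtual IP, passing client IPs through with
    # the haproxy PROXY protocol.
    ceph nfs cluster create cephfs \
        --ingress --virtual-ip=192.168.122.2/24 \
        --ingress-mode=haproxy-protocol \
        '--placement=compute-0 compute-1 compute-2 '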
Nov 24 09:28:13 compute-0 podman[93237]: 2025-11-24 09:28:13.119091818 +0000 UTC m=+0.060948769 container create 5d207dfdbcecf9878b651d127ebae85c7a0bdce9c79896aa99134660f4da4100 (image=quay.io/ceph/ceph:v19, name=modest_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 09:28:13 compute-0 systemd[1]: Started libpod-conmon-5d207dfdbcecf9878b651d127ebae85c7a0bdce9c79896aa99134660f4da4100.scope.
Nov 24 09:28:13 compute-0 podman[93237]: 2025-11-24 09:28:13.091084848 +0000 UTC m=+0.032941839 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6630a3821b4cea6d2c8091fecb31cc8ba2945ee9df6eaa22e1351bcf19bfe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6630a3821b4cea6d2c8091fecb31cc8ba2945ee9df6eaa22e1351bcf19bfe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6630a3821b4cea6d2c8091fecb31cc8ba2945ee9df6eaa22e1351bcf19bfe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:13 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.mauvni(active, since 2s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:13 compute-0 podman[93237]: 2025-11-24 09:28:13.228243923 +0000 UTC m=+0.170100894 container init 5d207dfdbcecf9878b651d127ebae85c7a0bdce9c79896aa99134660f4da4100 (image=quay.io/ceph/ceph:v19, name=modest_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:28:13 compute-0 podman[93237]: 2025-11-24 09:28:13.235656128 +0000 UTC m=+0.177513079 container start 5d207dfdbcecf9878b651d127ebae85c7a0bdce9c79896aa99134660f4da4100 (image=quay.io/ceph/ceph:v19, name=modest_babbage, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 09:28:13 compute-0 podman[93237]: 2025-11-24 09:28:13.23916059 +0000 UTC m=+0.181017551 container attach 5d207dfdbcecf9878b651d127ebae85c7a0bdce9c79896aa99134660f4da4100 (image=quay.io/ceph/ceph:v19, name=modest_babbage, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:28:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:28:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 09:28:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:28:13 compute-0 sudo[93212]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:13 compute-0 ceph-mon[74331]: 3.f deep-scrub starts
Nov 24 09:28:13 compute-0 ceph-mon[74331]: 3.f deep-scrub ok
Nov 24 09:28:13 compute-0 ceph-mon[74331]: pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:13 compute-0 ceph-mon[74331]: [24/Nov/2025:09:28:12] ENGINE Bus STARTING
Nov 24 09:28:13 compute-0 ceph-mon[74331]: from='client.14508 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:28:13 compute-0 ceph-mon[74331]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:13 compute-0 ceph-mon[74331]: [24/Nov/2025:09:28:12] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:28:13 compute-0 ceph-mon[74331]: [24/Nov/2025:09:28:12] ENGINE Client ('192.168.122.100', 34270) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:28:13 compute-0 ceph-mon[74331]: 2.b scrub starts
Nov 24 09:28:13 compute-0 ceph-mon[74331]: 2.b scrub ok
Nov 24 09:28:13 compute-0 ceph-mon[74331]: [24/Nov/2025:09:28:12] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:28:13 compute-0 ceph-mon[74331]: [24/Nov/2025:09:28:12] ENGINE Bus STARTED
Nov 24 09:28:13 compute-0 ceph-mon[74331]: 3.1 scrub starts
Nov 24 09:28:13 compute-0 ceph-mon[74331]: 3.1 scrub ok
Nov 24 09:28:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-0 ceph-mon[74331]: 3.c scrub starts
Nov 24 09:28:13 compute-0 ceph-mon[74331]: 3.c scrub ok
Nov 24 09:28:13 compute-0 ceph-mon[74331]: mgrmap e23: compute-0.mauvni(active, since 2s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:28:13 compute-0 sudo[93307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:13 compute-0 sudo[93307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:13 compute-0 sudo[93307]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:13 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14520 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:28:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Nov 24 09:28:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 24 09:28:13 compute-0 sudo[93332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 24 09:28:13 compute-0 sudo[93332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:13 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Nov 24 09:28:13 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Nov 24 09:28:13 compute-0 sudo[93332]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:28:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:28:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 09:28:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:28:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:28:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:28:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 09:28:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:28:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:28:14 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 24 09:28:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
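The `config generate-minimal-conf` dispatched above renders the small ceph.conf that cephadm then pushes to /etc/ceph/ceph.conf on each managed host. It can be generated by hand; the fsid below is this cluster's, but the mon address list is an assumption since only compute-0's 192.168.122.100 appears in this log:

    ceph config generate-minimal-conf
    # Typical output shape (addresses beyond compute-0 are hypothetical):
    # [global]
    #         fsid = 84a084c3-61a7-5de7-8207-1f88efa59a64
    #         mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] ...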
Nov 24 09:28:14 compute-0 sudo[93378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:28:14 compute-0 sudo[93378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-0 sudo[93378]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-0 sudo[93403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:28:14 compute-0 sudo[93403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-0 sudo[93403]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-0 sudo[93428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:28:14 compute-0 sudo[93428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-0 sudo[93428]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:14 compute-0 sudo[93453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:14 compute-0 sudo[93453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-0 sudo[93453]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-0 sudo[93478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:28:14 compute-0 sudo[93478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-0 sudo[93478]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-0 sudo[93526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:28:14 compute-0 sudo[93526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-0 sudo[93526]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 24 09:28:14 compute-0 ceph-mon[74331]: 2.18 deep-scrub starts
Nov 24 09:28:14 compute-0 ceph-mon[74331]: 2.18 deep-scrub ok
Nov 24 09:28:14 compute-0 ceph-mon[74331]: from='client.14520 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 09:28:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 24 09:28:14 compute-0 ceph-mon[74331]: 5.19 deep-scrub starts
Nov 24 09:28:14 compute-0 ceph-mon[74331]: 5.19 deep-scrub ok
Nov 24 09:28:14 compute-0 ceph-mon[74331]: 3.a scrub starts
Nov 24 09:28:14 compute-0 ceph-mon[74331]: 3.a scrub ok
Nov 24 09:28:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:28:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:28:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:28:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 24 09:28:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 24 09:28:14 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 24 09:28:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Nov 24 09:28:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
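Once the .nfs pool is created and tagged with the nfs application above, the tag can be verified directly:

    # Expect {"nfs": {}} once the application-enable command commits.
    ceph osd pool application get .nfs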
Nov 24 09:28:14 compute-0 sudo[93551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:28:14 compute-0 sudo[93551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-0 sudo[93551]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-0 sudo[93576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 24 09:28:14 compute-0 sudo[93576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-0 sudo[93576]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:14 compute-0 sudo[93601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:28:14 compute-0 sudo[93601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-0 sudo[93601]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 24 09:28:14 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 24 09:28:14 compute-0 sudo[93626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:28:14 compute-0 sudo[93626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-0 sudo[93626]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-0 sudo[93651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:28:14 compute-0 sudo[93651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-0 sudo[93651]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:14 compute-0 sudo[93676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:14 compute-0 sudo[93676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:14 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:14 compute-0 sudo[93676]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 sudo[93701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:28:15 compute-0 sudo[93701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[93701]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 sudo[93749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:28:15 compute-0 sudo[93749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[93749]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 sudo[93774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:28:15 compute-0 sudo[93774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[93774]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:15 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.mauvni(active, since 4s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:15 compute-0 sudo[93799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:15 compute-0 sudo[93799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[93799]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:15 compute-0 sudo[93824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:28:15 compute-0 sudo[93824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[93824]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 sudo[93849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:28:15 compute-0 sudo[93849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[93849]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 sudo[93874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:28:15 compute-0 sudo[93874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[93874]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 sudo[93899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:15 compute-0 sudo[93899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[93899]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 sudo[93924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:28:15 compute-0 sudo[93924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[93924]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 24 09:28:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 24 09:28:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 24 09:28:15 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 24 09:28:15 compute-0 ceph-mon[74331]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:28:15 compute-0 ceph-mon[74331]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:28:15 compute-0 ceph-mon[74331]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:28:15 compute-0 ceph-mon[74331]: pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:15 compute-0 ceph-mon[74331]: 4.2 scrub starts
Nov 24 09:28:15 compute-0 ceph-mon[74331]: 4.2 scrub ok
Nov 24 09:28:15 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 24 09:28:15 compute-0 ceph-mon[74331]: osdmap e49: 3 total, 3 up, 3 in
Nov 24 09:28:15 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 24 09:28:15 compute-0 ceph-mon[74331]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:15 compute-0 ceph-mon[74331]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:15 compute-0 ceph-mon[74331]: 3.6 scrub starts
Nov 24 09:28:15 compute-0 ceph-mon[74331]: 3.6 scrub ok
Nov 24 09:28:15 compute-0 ceph-mon[74331]: 4.d scrub starts
Nov 24 09:28:15 compute-0 ceph-mon[74331]: 4.d scrub ok
Nov 24 09:28:15 compute-0 ceph-mon[74331]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:28:15 compute-0 ceph-mon[74331]: mgrmap e24: compute-0.mauvni(active, since 4s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:28:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 24 09:28:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:15 compute-0 systemd[1]: libpod-5d207dfdbcecf9878b651d127ebae85c7a0bdce9c79896aa99134660f4da4100.scope: Deactivated successfully.
Nov 24 09:28:15 compute-0 podman[93237]: 2025-11-24 09:28:15.707823529 +0000 UTC m=+2.649680480 container died 5d207dfdbcecf9878b651d127ebae85c7a0bdce9c79896aa99134660f4da4100 (image=quay.io/ceph/ceph:v19, name=modest_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 09:28:15 compute-0 sudo[93982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:28:15 compute-0 sudo[93982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:15 compute-0 sudo[93982]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-30e6630a3821b4cea6d2c8091fecb31cc8ba2945ee9df6eaa22e1351bcf19bfe-merged.mount: Deactivated successfully.
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:15 compute-0 podman[93237]: 2025-11-24 09:28:15.759192531 +0000 UTC m=+2.701049482 container remove 5d207dfdbcecf9878b651d127ebae85c7a0bdce9c79896aa99134660f4da4100 (image=quay.io/ceph/ceph:v19, name=modest_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:28:15 compute-0 systemd[1]: libpod-conmon-5d207dfdbcecf9878b651d127ebae85c7a0bdce9c79896aa99134660f4da4100.scope: Deactivated successfully.
Nov 24 09:28:15 compute-0 sudo[93184]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 sudo[94020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:28:15 compute-0 sudo[94020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[94020]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 24 09:28:15 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 24 09:28:15 compute-0 sudo[94045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:15 compute-0 sudo[94045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[94045]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:15 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:15 compute-0 sudo[94070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:28:15 compute-0 sudo[94070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[94070]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:15 compute-0 sudo[94095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:28:15 compute-0 sudo[94095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:15 compute-0 sudo[94095]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:16 compute-0 sudo[94120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:28:16 compute-0 sudo[94120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:16 compute-0 sudo[94120]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:16 compute-0 sudo[94145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:16 compute-0 sudo[94145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:16 compute-0 sudo[94145]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:16 compute-0 sudo[94170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:28:16 compute-0 sudo[94170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:16 compute-0 sudo[94170]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:28:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:28:16 compute-0 sudo[94241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:28:16 compute-0 sudo[94241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-0 sudo[94241]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:16 compute-0 sudo[94295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:28:16 compute-0 sudo[94295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:16 compute-0 sudo[94295]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:16 compute-0 sudo[94344]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmqwrxzkgojbqlykefxwxcmowfxmcsjm ; /usr/bin/python3'
Nov 24 09:28:16 compute-0 sudo[94344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:16 compute-0 sudo[94343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:16 compute-0 sudo[94343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:16 compute-0 sudo[94343]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:28:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:28:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v9: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-0 python3[94358]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 09:28:16 compute-0 sudo[94344]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 24 09:28:16 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:16 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 24 09:28:16 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 24 09:28:16 compute-0 ceph-mon[74331]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:16 compute-0 ceph-mon[74331]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:16 compute-0 ceph-mon[74331]: 4.1 deep-scrub starts
Nov 24 09:28:16 compute-0 ceph-mon[74331]: 4.1 deep-scrub ok
Nov 24 09:28:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 24 09:28:16 compute-0 ceph-mon[74331]: osdmap e50: 3 total, 3 up, 3 in
Nov 24 09:28:16 compute-0 ceph-mon[74331]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-0 ceph-mon[74331]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-0 ceph-mon[74331]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:16 compute-0 ceph-mon[74331]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:28:16 compute-0 ceph-mon[74331]: 5.6 scrub starts
Nov 24 09:28:16 compute-0 ceph-mon[74331]: 5.6 scrub ok
Nov 24 09:28:16 compute-0 ceph-mon[74331]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:16 compute-0 ceph-mon[74331]: 5.16 deep-scrub starts
Nov 24 09:28:16 compute-0 ceph-mon[74331]: 5.16 deep-scrub ok
Nov 24 09:28:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:16 compute-0 ceph-mon[74331]: osdmap e51: 3 total, 3 up, 3 in
Nov 24 09:28:16 compute-0 sudo[94441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwqvhprzjkmiingvmqxzryzsinpcznjy ; /usr/bin/python3'
Nov 24 09:28:16 compute-0 sudo[94441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:16 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 24 09:28:16 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 24 09:28:16 compute-0 python3[94443]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763976496.221464-37335-49750986260577/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=5b68b38eb199b40419da711d3119a1cd74c89fee backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:28:16 compute-0 sudo[94441]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:17 compute-0 sudo[94491]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiqwrkcstvdkbckfxcldsagpsmqyksdq ; /usr/bin/python3'
Nov 24 09:28:17 compute-0 sudo[94491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:17 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.mauvni(active, since 6s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:28:17 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:28:17 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:28:17 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:17 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev e50c2553-61bb-44e8-8573-666929243308 (Updating node-exporter deployment (+2 -> 3))
Nov 24 09:28:17 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Nov 24 09:28:17 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Nov 24 09:28:17 compute-0 python3[94493]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:17 compute-0 podman[94494]: 2025-11-24 09:28:17.553147923 +0000 UTC m=+0.060882058 container create 1be14b3c99f2a4a871aef562a21672ba37ba7ff76e0981347e8ab5ff9b259f0c (image=quay.io/ceph/ceph:v19, name=awesome_euclid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:28:17 compute-0 systemd[1]: Started libpod-conmon-1be14b3c99f2a4a871aef562a21672ba37ba7ff76e0981347e8ab5ff9b259f0c.scope.
Nov 24 09:28:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c3f63ede51de5c760f3a94fffc0516bb67d87af973b0adb69f3cc9c5a95835/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c3f63ede51de5c760f3a94fffc0516bb67d87af973b0adb69f3cc9c5a95835/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:17 compute-0 podman[94494]: 2025-11-24 09:28:17.528956631 +0000 UTC m=+0.036690846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:17 compute-0 podman[94494]: 2025-11-24 09:28:17.62808601 +0000 UTC m=+0.135820175 container init 1be14b3c99f2a4a871aef562a21672ba37ba7ff76e0981347e8ab5ff9b259f0c (image=quay.io/ceph/ceph:v19, name=awesome_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:28:17 compute-0 podman[94494]: 2025-11-24 09:28:17.634278526 +0000 UTC m=+0.142012661 container start 1be14b3c99f2a4a871aef562a21672ba37ba7ff76e0981347e8ab5ff9b259f0c (image=quay.io/ceph/ceph:v19, name=awesome_euclid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:28:17 compute-0 podman[94494]: 2025-11-24 09:28:17.637587205 +0000 UTC m=+0.145321350 container attach 1be14b3c99f2a4a871aef562a21672ba37ba7ff76e0981347e8ab5ff9b259f0c (image=quay.io/ceph/ceph:v19, name=awesome_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 24 09:28:17 compute-0 ceph-mon[74331]: pgmap v9: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:17 compute-0 ceph-mon[74331]: 4.19 scrub starts
Nov 24 09:28:17 compute-0 ceph-mon[74331]: 4.19 scrub ok
Nov 24 09:28:17 compute-0 ceph-mon[74331]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:28:17 compute-0 ceph-mon[74331]: 3.2 scrub starts
Nov 24 09:28:17 compute-0 ceph-mon[74331]: 3.2 scrub ok
Nov 24 09:28:17 compute-0 ceph-mon[74331]: 4.5 scrub starts
Nov 24 09:28:17 compute-0 ceph-mon[74331]: 4.5 scrub ok
Nov 24 09:28:17 compute-0 ceph-mon[74331]: mgrmap e25: compute-0.mauvni(active, since 6s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:28:17 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:17 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:17 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:17 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 24 09:28:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:17 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 24 09:28:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Nov 24 09:28:18 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1364618523' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 24 09:28:18 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1364618523' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 24 09:28:18 compute-0 systemd[1]: libpod-1be14b3c99f2a4a871aef562a21672ba37ba7ff76e0981347e8ab5ff9b259f0c.scope: Deactivated successfully.
Nov 24 09:28:18 compute-0 podman[94494]: 2025-11-24 09:28:18.072817292 +0000 UTC m=+0.580551457 container died 1be14b3c99f2a4a871aef562a21672ba37ba7ff76e0981347e8ab5ff9b259f0c (image=quay.io/ceph/ceph:v19, name=awesome_euclid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 09:28:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8c3f63ede51de5c760f3a94fffc0516bb67d87af973b0adb69f3cc9c5a95835-merged.mount: Deactivated successfully.
Nov 24 09:28:18 compute-0 podman[94494]: 2025-11-24 09:28:18.115931619 +0000 UTC m=+0.623665744 container remove 1be14b3c99f2a4a871aef562a21672ba37ba7ff76e0981347e8ab5ff9b259f0c (image=quay.io/ceph/ceph:v19, name=awesome_euclid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 09:28:18 compute-0 systemd[1]: libpod-conmon-1be14b3c99f2a4a871aef562a21672ba37ba7ff76e0981347e8ab5ff9b259f0c.scope: Deactivated successfully.
Nov 24 09:28:18 compute-0 sudo[94491]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Nov 24 09:28:18 compute-0 ceph-mon[74331]: Deploying daemon node-exporter.compute-1 on compute-1
Nov 24 09:28:18 compute-0 ceph-mon[74331]: 4.3 scrub starts
Nov 24 09:28:18 compute-0 ceph-mon[74331]: 4.3 scrub ok
Nov 24 09:28:18 compute-0 ceph-mon[74331]: 5.c scrub starts
Nov 24 09:28:18 compute-0 ceph-mon[74331]: 5.c scrub ok
Nov 24 09:28:18 compute-0 ceph-mon[74331]: 3.13 scrub starts
Nov 24 09:28:18 compute-0 ceph-mon[74331]: 3.13 scrub ok
Nov 24 09:28:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1364618523' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 24 09:28:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1364618523' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 24 09:28:18 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 24 09:28:18 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 24 09:28:18 compute-0 sudo[94568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agdusysrmbvygtxwxvqccnghzfweoxuc ; /usr/bin/python3'
Nov 24 09:28:18 compute-0 sudo[94568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:18 compute-0 python3[94570]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:19 compute-0 podman[94572]: 2025-11-24 09:28:19.054010099 +0000 UTC m=+0.046372995 container create 2f2f1b0308e922a3054e09a70f36770cdf4d9e1aef95a73abc842b3072f836ad (image=quay.io/ceph/ceph:v19, name=recursing_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:28:19 compute-0 systemd[1]: Started libpod-conmon-2f2f1b0308e922a3054e09a70f36770cdf4d9e1aef95a73abc842b3072f836ad.scope.
Nov 24 09:28:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00debf6f05940ee92346e4dc5c92b5b430aadca8ba65630ff33d32ea207fe37/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00debf6f05940ee92346e4dc5c92b5b430aadca8ba65630ff33d32ea207fe37/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:19 compute-0 podman[94572]: 2025-11-24 09:28:19.114880916 +0000 UTC m=+0.107243822 container init 2f2f1b0308e922a3054e09a70f36770cdf4d9e1aef95a73abc842b3072f836ad (image=quay.io/ceph/ceph:v19, name=recursing_shaw, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 09:28:19 compute-0 podman[94572]: 2025-11-24 09:28:19.123415697 +0000 UTC m=+0.115778603 container start 2f2f1b0308e922a3054e09a70f36770cdf4d9e1aef95a73abc842b3072f836ad (image=quay.io/ceph/ceph:v19, name=recursing_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:19 compute-0 podman[94572]: 2025-11-24 09:28:19.127305859 +0000 UTC m=+0.119668815 container attach 2f2f1b0308e922a3054e09a70f36770cdf4d9e1aef95a73abc842b3072f836ad (image=quay.io/ceph/ceph:v19, name=recursing_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:28:19 compute-0 podman[94572]: 2025-11-24 09:28:19.035752599 +0000 UTC m=+0.028115525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 24 09:28:19 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3211639658' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 09:28:19 compute-0 recursing_shaw[94588]: 
Nov 24 09:28:19 compute-0 recursing_shaw[94588]: {"fsid":"84a084c3-61a7-5de7-8207-1f88efa59a64","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":67,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":51,"num_osds":3,"num_up_osds":3,"osd_up_since":1763976455,"num_in_osds":3,"osd_in_since":1763976437,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":107462656,"bytes_avail":64304463872,"bytes_total":64411926528,"read_bytes_sec":30029,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2025-11-24T09:28:11:441297+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-11-24T09:27:51.721300+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.mauvni":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.qelqsg":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.rzcnzg":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14394":{"start_epoch":4,"start_stamp":"2025-11-24T09:27:51.712701+0000","gid":14394,"addr":"192.168.122.100:0/2097383266","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.zlrxyg","kernel_description":"#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 
2025","kernel_version":"5.14.0-639.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864320","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"0565e2b2-234e-414b-b909-932048ceb050","zone_name":"default","zonegroup_id":"5f03f326-32a0-4275-804c-1875d841eeca","zonegroup_name":"default"},"task_status":{}},"24148":{"start_epoch":4,"start_stamp":"2025-11-24T09:27:51.718135+0000","gid":24148,"addr":"192.168.122.102:0/2761939167","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.qecnjt","kernel_description":"#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025","kernel_version":"5.14.0-639.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864312","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"0565e2b2-234e-414b-b909-932048ceb050","zone_name":"default","zonegroup_id":"5f03f326-32a0-4275-804c-1875d841eeca","zonegroup_name":"default"},"task_status":{}},"24191":{"start_epoch":4,"start_stamp":"2025-11-24T09:27:51.712940+0000","gid":24191,"addr":"192.168.122.101:0/2580956473","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.vproll","kernel_description":"#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025","kernel_version":"5.14.0-639.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864320","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"0565e2b2-234e-414b-b909-932048ceb050","zone_name":"default","zonegroup_id":"5f03f326-32a0-4275-804c-1875d841eeca","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"e50c2553-61bb-44e8-8573-666929243308":{"message":"Updating node-exporter deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 24 09:28:19 compute-0 systemd[1]: libpod-2f2f1b0308e922a3054e09a70f36770cdf4d9e1aef95a73abc842b3072f836ad.scope: Deactivated successfully.
Nov 24 09:28:19 compute-0 podman[94572]: 2025-11-24 09:28:19.570707159 +0000 UTC m=+0.563070055 container died 2f2f1b0308e922a3054e09a70f36770cdf4d9e1aef95a73abc842b3072f836ad (image=quay.io/ceph/ceph:v19, name=recursing_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:28:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e00debf6f05940ee92346e4dc5c92b5b430aadca8ba65630ff33d32ea207fe37-merged.mount: Deactivated successfully.
Nov 24 09:28:19 compute-0 podman[94572]: 2025-11-24 09:28:19.614912852 +0000 UTC m=+0.607275758 container remove 2f2f1b0308e922a3054e09a70f36770cdf4d9e1aef95a73abc842b3072f836ad (image=quay.io/ceph/ceph:v19, name=recursing_shaw, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 09:28:19 compute-0 systemd[1]: libpod-conmon-2f2f1b0308e922a3054e09a70f36770cdf4d9e1aef95a73abc842b3072f836ad.scope: Deactivated successfully.
Nov 24 09:28:19 compute-0 sudo[94568]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:19 compute-0 ceph-mon[74331]: pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Nov 24 09:28:19 compute-0 ceph-mon[74331]: 4.6 scrub starts
Nov 24 09:28:19 compute-0 ceph-mon[74331]: 4.6 scrub ok
Nov 24 09:28:19 compute-0 ceph-mon[74331]: 2.e scrub starts
Nov 24 09:28:19 compute-0 ceph-mon[74331]: 2.e scrub ok
Nov 24 09:28:19 compute-0 ceph-mon[74331]: 4.a scrub starts
Nov 24 09:28:19 compute-0 ceph-mon[74331]: 4.a scrub ok
Nov 24 09:28:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3211639658' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 09:28:19 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 24 09:28:19 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 24 09:28:19 compute-0 sudo[94648]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttlflkcyrnopvlclnkcvxblspmudtbdq ; /usr/bin/python3'
Nov 24 09:28:19 compute-0 sudo[94648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:28:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:28:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 24 09:28:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:19 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Nov 24 09:28:19 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Nov 24 09:28:19 compute-0 python3[94650]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:20 compute-0 podman[94651]: 2025-11-24 09:28:20.026147124 +0000 UTC m=+0.051895526 container create fafeb296fdef4177ff2680f1f2eed1e2339fa4ebb4effbb5ae17023ed8ce27f4 (image=quay.io/ceph/ceph:v19, name=pensive_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:28:20 compute-0 systemd[1]: Started libpod-conmon-fafeb296fdef4177ff2680f1f2eed1e2339fa4ebb4effbb5ae17023ed8ce27f4.scope.
Nov 24 09:28:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9363d3d293454a4a669e73ea72eb6ad25cafe49a32862327b5c1e018c82665/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9363d3d293454a4a669e73ea72eb6ad25cafe49a32862327b5c1e018c82665/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:20 compute-0 podman[94651]: 2025-11-24 09:28:20.008022966 +0000 UTC m=+0.033771388 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:20 compute-0 podman[94651]: 2025-11-24 09:28:20.112720196 +0000 UTC m=+0.138468618 container init fafeb296fdef4177ff2680f1f2eed1e2339fa4ebb4effbb5ae17023ed8ce27f4 (image=quay.io/ceph/ceph:v19, name=pensive_archimedes, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:28:20 compute-0 podman[94651]: 2025-11-24 09:28:20.11967837 +0000 UTC m=+0.145426772 container start fafeb296fdef4177ff2680f1f2eed1e2339fa4ebb4effbb5ae17023ed8ce27f4 (image=quay.io/ceph/ceph:v19, name=pensive_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 24 09:28:20 compute-0 podman[94651]: 2025-11-24 09:28:20.123374967 +0000 UTC m=+0.149123399 container attach fafeb296fdef4177ff2680f1f2eed1e2339fa4ebb4effbb5ae17023ed8ce27f4 (image=quay.io/ceph/ceph:v19, name=pensive_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:28:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Nov 24 09:28:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:28:20 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915495198' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:28:20 compute-0 pensive_archimedes[94667]: 
Nov 24 09:28:20 compute-0 pensive_archimedes[94667]: {"epoch":3,"fsid":"84a084c3-61a7-5de7-8207-1f88efa59a64","modified":"2025-11-24T09:27:06.832853Z","created":"2025-11-24T09:25:03.414609Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Nov 24 09:28:20 compute-0 pensive_archimedes[94667]: dumped monmap epoch 3
Nov 24 09:28:20 compute-0 systemd[1]: libpod-fafeb296fdef4177ff2680f1f2eed1e2339fa4ebb4effbb5ae17023ed8ce27f4.scope: Deactivated successfully.
Nov 24 09:28:20 compute-0 podman[94692]: 2025-11-24 09:28:20.606171337 +0000 UTC m=+0.025194976 container died fafeb296fdef4177ff2680f1f2eed1e2339fa4ebb4effbb5ae17023ed8ce27f4 (image=quay.io/ceph/ceph:v19, name=pensive_archimedes, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b9363d3d293454a4a669e73ea72eb6ad25cafe49a32862327b5c1e018c82665-merged.mount: Deactivated successfully.
Nov 24 09:28:20 compute-0 podman[94692]: 2025-11-24 09:28:20.642727439 +0000 UTC m=+0.061751068 container remove fafeb296fdef4177ff2680f1f2eed1e2339fa4ebb4effbb5ae17023ed8ce27f4 (image=quay.io/ceph/ceph:v19, name=pensive_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 24 09:28:20 compute-0 systemd[1]: libpod-conmon-fafeb296fdef4177ff2680f1f2eed1e2339fa4ebb4effbb5ae17023ed8ce27f4.scope: Deactivated successfully.
Nov 24 09:28:20 compute-0 sudo[94648]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:20 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 24 09:28:20 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 24 09:28:20 compute-0 ceph-mon[74331]: 4.1d scrub starts
Nov 24 09:28:20 compute-0 ceph-mon[74331]: 4.1d scrub ok
Nov 24 09:28:20 compute-0 ceph-mon[74331]: 5.a scrub starts
Nov 24 09:28:20 compute-0 ceph-mon[74331]: 5.a scrub ok
Nov 24 09:28:20 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:20 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:20 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:20 compute-0 ceph-mon[74331]: Deploying daemon node-exporter.compute-2 on compute-2
Nov 24 09:28:20 compute-0 ceph-mon[74331]: 5.7 scrub starts
Nov 24 09:28:20 compute-0 ceph-mon[74331]: 5.7 scrub ok
Nov 24 09:28:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3915495198' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:28:21 compute-0 sudo[94730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hohuljbhpvoqpbsipvgvltgdbjjxyrxm ; /usr/bin/python3'
Nov 24 09:28:21 compute-0 sudo[94730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:21 compute-0 python3[94732]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:21 compute-0 podman[94733]: 2025-11-24 09:28:21.37526139 +0000 UTC m=+0.041505029 container create 634ed030b7a47ef42f0a1db99199d5f3ec57c8c67d16cd0d6e9fe76c697b1956 (image=quay.io/ceph/ceph:v19, name=strange_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:28:21 compute-0 systemd[1]: Started libpod-conmon-634ed030b7a47ef42f0a1db99199d5f3ec57c8c67d16cd0d6e9fe76c697b1956.scope.
Nov 24 09:28:21 compute-0 podman[94733]: 2025-11-24 09:28:21.35827972 +0000 UTC m=+0.024523379 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fafbac9a899dc7bbd1f84b9f985a554d87a29335f3db0f16678be7fbb9cc2e2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fafbac9a899dc7bbd1f84b9f985a554d87a29335f3db0f16678be7fbb9cc2e2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:21 compute-0 podman[94733]: 2025-11-24 09:28:21.474712406 +0000 UTC m=+0.140956075 container init 634ed030b7a47ef42f0a1db99199d5f3ec57c8c67d16cd0d6e9fe76c697b1956 (image=quay.io/ceph/ceph:v19, name=strange_gagarin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:28:21 compute-0 podman[94733]: 2025-11-24 09:28:21.482762306 +0000 UTC m=+0.149005945 container start 634ed030b7a47ef42f0a1db99199d5f3ec57c8c67d16cd0d6e9fe76c697b1956 (image=quay.io/ceph/ceph:v19, name=strange_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 09:28:21 compute-0 podman[94733]: 2025-11-24 09:28:21.485844629 +0000 UTC m=+0.152088298 container attach 634ed030b7a47ef42f0a1db99199d5f3ec57c8c67d16cd0d6e9fe76c697b1956 (image=quay.io/ceph/ceph:v19, name=strange_gagarin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 24 09:28:21 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 24 09:28:21 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 24 09:28:21 compute-0 ceph-mon[74331]: pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Nov 24 09:28:21 compute-0 ceph-mon[74331]: 4.1c scrub starts
Nov 24 09:28:21 compute-0 ceph-mon[74331]: 4.1c scrub ok
Nov 24 09:28:21 compute-0 ceph-mon[74331]: 3.b scrub starts
Nov 24 09:28:21 compute-0 ceph-mon[74331]: 3.b scrub ok
Nov 24 09:28:21 compute-0 ceph-mon[74331]: 3.d scrub starts
Nov 24 09:28:21 compute-0 ceph-mon[74331]: 3.d scrub ok
Nov 24 09:28:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Nov 24 09:28:21 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/72635421' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 24 09:28:21 compute-0 strange_gagarin[94749]: [client.openstack]
Nov 24 09:28:21 compute-0 strange_gagarin[94749]:         key = AQBLJCRpAAAAABAAXAzKB5itq82KD4bRedT2Ig==
Nov 24 09:28:21 compute-0 strange_gagarin[94749]:         caps mgr = "allow *"
Nov 24 09:28:21 compute-0 strange_gagarin[94749]:         caps mon = "profile rbd"
Nov 24 09:28:21 compute-0 strange_gagarin[94749]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 24 09:28:21 compute-0 systemd[1]: libpod-634ed030b7a47ef42f0a1db99199d5f3ec57c8c67d16cd0d6e9fe76c697b1956.scope: Deactivated successfully.
Nov 24 09:28:21 compute-0 podman[94733]: 2025-11-24 09:28:21.96912861 +0000 UTC m=+0.635372249 container died 634ed030b7a47ef42f0a1db99199d5f3ec57c8c67d16cd0d6e9fe76c697b1956 (image=quay.io/ceph/ceph:v19, name=strange_gagarin, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:28:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fafbac9a899dc7bbd1f84b9f985a554d87a29335f3db0f16678be7fbb9cc2e2-merged.mount: Deactivated successfully.
Nov 24 09:28:22 compute-0 podman[94733]: 2025-11-24 09:28:22.006526163 +0000 UTC m=+0.672769802 container remove 634ed030b7a47ef42f0a1db99199d5f3ec57c8c67d16cd0d6e9fe76c697b1956 (image=quay.io/ceph/ceph:v19, name=strange_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:22 compute-0 systemd[1]: libpod-conmon-634ed030b7a47ef42f0a1db99199d5f3ec57c8c67d16cd0d6e9fe76c697b1956.scope: Deactivated successfully.
Nov 24 09:28:22 compute-0 sudo[94730]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 09:28:22 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 24 09:28:22 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 24 09:28:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:28:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:28:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 24 09:28:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:22 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev e50c2553-61bb-44e8-8573-666929243308 (Updating node-exporter deployment (+2 -> 3))
Nov 24 09:28:22 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event e50c2553-61bb-44e8-8573-666929243308 (Updating node-exporter deployment (+2 -> 3)) in 5 seconds
Nov 24 09:28:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 24 09:28:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 24 09:28:22 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:28:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 24 09:28:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:28:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:28:22 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:22 compute-0 ceph-mon[74331]: 6.1c deep-scrub starts
Nov 24 09:28:22 compute-0 ceph-mon[74331]: 6.1c deep-scrub ok
Nov 24 09:28:22 compute-0 ceph-mon[74331]: 5.17 scrub starts
Nov 24 09:28:22 compute-0 ceph-mon[74331]: 5.17 scrub ok
Nov 24 09:28:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/72635421' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 24 09:28:22 compute-0 ceph-mon[74331]: 3.5 scrub starts
Nov 24 09:28:22 compute-0 ceph-mon[74331]: 3.5 scrub ok
Nov 24 09:28:22 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:22 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:22 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:28:22 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:22 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:22 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:28:22 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:28:22 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:22 compute-0 sudo[94787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:22 compute-0 sudo[94787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:22 compute-0 sudo[94787]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:23 compute-0 sudo[94813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:28:23 compute-0 sudo[94813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:23 compute-0 podman[94997]: 2025-11-24 09:28:23.441860573 +0000 UTC m=+0.036595464 container create d26f68c828a7218f2c243dfa04a13c194a5778927e6bd6d2fc29bd4fa021d444 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 09:28:23 compute-0 systemd[1]: Started libpod-conmon-d26f68c828a7218f2c243dfa04a13c194a5778927e6bd6d2fc29bd4fa021d444.scope.
Nov 24 09:28:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:23 compute-0 sudo[95041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vywiicznovmsdgjcivkjzqbgzxrnacfr ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763976503.117737-37407-170233280230538/async_wrapper.py j244454990368 30 /home/zuul/.ansible/tmp/ansible-tmp-1763976503.117737-37407-170233280230538/AnsiballZ_command.py _'
Nov 24 09:28:23 compute-0 sudo[95041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:23 compute-0 podman[94997]: 2025-11-24 09:28:23.513704219 +0000 UTC m=+0.108439130 container init d26f68c828a7218f2c243dfa04a13c194a5778927e6bd6d2fc29bd4fa021d444 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 09:28:23 compute-0 podman[94997]: 2025-11-24 09:28:23.51967813 +0000 UTC m=+0.114413021 container start d26f68c828a7218f2c243dfa04a13c194a5778927e6bd6d2fc29bd4fa021d444 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 09:28:23 compute-0 podman[94997]: 2025-11-24 09:28:23.426140783 +0000 UTC m=+0.020875694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:23 compute-0 podman[94997]: 2025-11-24 09:28:23.522457945 +0000 UTC m=+0.117192836 container attach d26f68c828a7218f2c243dfa04a13c194a5778927e6bd6d2fc29bd4fa021d444 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Nov 24 09:28:23 compute-0 musing_heisenberg[95042]: 167 167
Nov 24 09:28:23 compute-0 systemd[1]: libpod-d26f68c828a7218f2c243dfa04a13c194a5778927e6bd6d2fc29bd4fa021d444.scope: Deactivated successfully.
Nov 24 09:28:23 compute-0 podman[94997]: 2025-11-24 09:28:23.526015429 +0000 UTC m=+0.120750330 container died d26f68c828a7218f2c243dfa04a13c194a5778927e6bd6d2fc29bd4fa021d444 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_heisenberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:28:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ce21b1c72af75edb6f0ffec4070d46372b142acbac32af7cb1650519fcf4563-merged.mount: Deactivated successfully.
Nov 24 09:28:23 compute-0 podman[94997]: 2025-11-24 09:28:23.56543929 +0000 UTC m=+0.160174191 container remove d26f68c828a7218f2c243dfa04a13c194a5778927e6bd6d2fc29bd4fa021d444 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_heisenberg, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:28:23 compute-0 systemd[1]: libpod-conmon-d26f68c828a7218f2c243dfa04a13c194a5778927e6bd6d2fc29bd4fa021d444.scope: Deactivated successfully.
Nov 24 09:28:23 compute-0 ansible-async_wrapper.py[95046]: Invoked with j244454990368 30 /home/zuul/.ansible/tmp/ansible-tmp-1763976503.117737-37407-170233280230538/AnsiballZ_command.py _
Nov 24 09:28:23 compute-0 ansible-async_wrapper.py[95065]: Starting module and watcher
Nov 24 09:28:23 compute-0 ansible-async_wrapper.py[95065]: Start watching 95066 (30)
Nov 24 09:28:23 compute-0 ansible-async_wrapper.py[95066]: Start module (95066)
Nov 24 09:28:23 compute-0 ansible-async_wrapper.py[95046]: Return async_wrapper task started.
Nov 24 09:28:23 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 24 09:28:23 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 24 09:28:23 compute-0 sudo[95041]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:23 compute-0 podman[95073]: 2025-11-24 09:28:23.751878478 +0000 UTC m=+0.041755966 container create 57d4355631b981ec4d4e9619e6fd848c3bcd50da2a2c951032780de4c6b3037e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chaplygin, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 24 09:28:23 compute-0 systemd[1]: Started libpod-conmon-57d4355631b981ec4d4e9619e6fd848c3bcd50da2a2c951032780de4c6b3037e.scope.
Nov 24 09:28:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5024cbac1903f6d04370800ecda8cae3b656c1e6684262d11ef9dd439719413/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5024cbac1903f6d04370800ecda8cae3b656c1e6684262d11ef9dd439719413/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5024cbac1903f6d04370800ecda8cae3b656c1e6684262d11ef9dd439719413/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5024cbac1903f6d04370800ecda8cae3b656c1e6684262d11ef9dd439719413/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5024cbac1903f6d04370800ecda8cae3b656c1e6684262d11ef9dd439719413/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:23 compute-0 podman[95073]: 2025-11-24 09:28:23.73420061 +0000 UTC m=+0.024078118 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:23 compute-0 podman[95073]: 2025-11-24 09:28:23.834130467 +0000 UTC m=+0.124007975 container init 57d4355631b981ec4d4e9619e6fd848c3bcd50da2a2c951032780de4c6b3037e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:28:23 compute-0 python3[95067]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:23 compute-0 podman[95073]: 2025-11-24 09:28:23.843785045 +0000 UTC m=+0.133662533 container start 57d4355631b981ec4d4e9619e6fd848c3bcd50da2a2c951032780de4c6b3037e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 24 09:28:23 compute-0 podman[95073]: 2025-11-24 09:28:23.846982121 +0000 UTC m=+0.136859609 container attach 57d4355631b981ec4d4e9619e6fd848c3bcd50da2a2c951032780de4c6b3037e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chaplygin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 09:28:23 compute-0 podman[95095]: 2025-11-24 09:28:23.899096131 +0000 UTC m=+0.047648745 container create 65e15e771f61646f79897e213e845b3ae43612388b50581db629cc6d83f5c0f2 (image=quay.io/ceph/ceph:v19, name=silly_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 09:28:23 compute-0 ceph-mon[74331]: pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s
Nov 24 09:28:23 compute-0 ceph-mon[74331]: 6.1 scrub starts
Nov 24 09:28:23 compute-0 ceph-mon[74331]: 6.1 scrub ok
Nov 24 09:28:23 compute-0 ceph-mon[74331]: 3.12 scrub starts
Nov 24 09:28:23 compute-0 ceph-mon[74331]: 3.12 scrub ok
Nov 24 09:28:23 compute-0 ceph-mon[74331]: 5.f scrub starts
Nov 24 09:28:23 compute-0 ceph-mon[74331]: 5.f scrub ok
Nov 24 09:28:23 compute-0 systemd[1]: Started libpod-conmon-65e15e771f61646f79897e213e845b3ae43612388b50581db629cc6d83f5c0f2.scope.
Nov 24 09:28:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:23 compute-0 podman[95095]: 2025-11-24 09:28:23.877441319 +0000 UTC m=+0.025993913 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19f197de14ef402e2a11873277c395574471034eb4a717097bfdfb2fd009ba6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c19f197de14ef402e2a11873277c395574471034eb4a717097bfdfb2fd009ba6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:23 compute-0 podman[95095]: 2025-11-24 09:28:23.98851252 +0000 UTC m=+0.137065184 container init 65e15e771f61646f79897e213e845b3ae43612388b50581db629cc6d83f5c0f2 (image=quay.io/ceph/ceph:v19, name=silly_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:28:24 compute-0 podman[95095]: 2025-11-24 09:28:24.000033822 +0000 UTC m=+0.148586456 container start 65e15e771f61646f79897e213e845b3ae43612388b50581db629cc6d83f5c0f2 (image=quay.io/ceph/ceph:v19, name=silly_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:28:24 compute-0 podman[95095]: 2025-11-24 09:28:24.003733169 +0000 UTC m=+0.152285773 container attach 65e15e771f61646f79897e213e845b3ae43612388b50581db629cc6d83f5c0f2 (image=quay.io/ceph/ceph:v19, name=silly_benz, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 24 09:28:24 compute-0 frosty_chaplygin[95091]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:28:24 compute-0 frosty_chaplygin[95091]: --> All data devices are unavailable
Nov 24 09:28:24 compute-0 systemd[1]: libpod-57d4355631b981ec4d4e9619e6fd848c3bcd50da2a2c951032780de4c6b3037e.scope: Deactivated successfully.
Nov 24 09:28:24 compute-0 conmon[95091]: conmon 57d4355631b981ec4d4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57d4355631b981ec4d4e9619e6fd848c3bcd50da2a2c951032780de4c6b3037e.scope/container/memory.events
Nov 24 09:28:24 compute-0 podman[95073]: 2025-11-24 09:28:24.265696309 +0000 UTC m=+0.555573807 container died 57d4355631b981ec4d4e9619e6fd848c3bcd50da2a2c951032780de4c6b3037e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5024cbac1903f6d04370800ecda8cae3b656c1e6684262d11ef9dd439719413-merged.mount: Deactivated successfully.
Nov 24 09:28:24 compute-0 podman[95073]: 2025-11-24 09:28:24.355216371 +0000 UTC m=+0.645093869 container remove 57d4355631b981ec4d4e9619e6fd848c3bcd50da2a2c951032780de4c6b3037e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_chaplygin, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:28:24 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:28:24 compute-0 silly_benz[95112]: 
Nov 24 09:28:24 compute-0 silly_benz[95112]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 09:28:24 compute-0 systemd[1]: libpod-conmon-57d4355631b981ec4d4e9619e6fd848c3bcd50da2a2c951032780de4c6b3037e.scope: Deactivated successfully.
Nov 24 09:28:24 compute-0 systemd[1]: libpod-65e15e771f61646f79897e213e845b3ae43612388b50581db629cc6d83f5c0f2.scope: Deactivated successfully.
Nov 24 09:28:24 compute-0 podman[95095]: 2025-11-24 09:28:24.391385334 +0000 UTC m=+0.539937948 container died 65e15e771f61646f79897e213e845b3ae43612388b50581db629cc6d83f5c0f2 (image=quay.io/ceph/ceph:v19, name=silly_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 24 09:28:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 09:28:24 compute-0 sudo[94813]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:24 compute-0 podman[95095]: 2025-11-24 09:28:24.429226936 +0000 UTC m=+0.577779510 container remove 65e15e771f61646f79897e213e845b3ae43612388b50581db629cc6d83f5c0f2 (image=quay.io/ceph/ceph:v19, name=silly_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:28:24 compute-0 systemd[1]: libpod-conmon-65e15e771f61646f79897e213e845b3ae43612388b50581db629cc6d83f5c0f2.scope: Deactivated successfully.
Nov 24 09:28:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c19f197de14ef402e2a11873277c395574471034eb4a717097bfdfb2fd009ba6-merged.mount: Deactivated successfully.
Nov 24 09:28:24 compute-0 ansible-async_wrapper.py[95066]: Module complete (95066)
Nov 24 09:28:24 compute-0 sudo[95171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:24 compute-0 sudo[95171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:24 compute-0 sudo[95171]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:24 compute-0 sudo[95196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:28:24 compute-0 sudo[95196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:24 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 24 09:28:24 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 24 09:28:24 compute-0 sudo[95287]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abrrvpgqdbkubjzophywhazwfaycbbop ; /usr/bin/python3'
Nov 24 09:28:24 compute-0 sudo[95287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:24 compute-0 python3[95294]: ansible-ansible.legacy.async_status Invoked with jid=j244454990368.95046 mode=status _async_dir=/root/.ansible_async
Nov 24 09:28:24 compute-0 ceph-mon[74331]: 3.17 scrub starts
Nov 24 09:28:24 compute-0 ceph-mon[74331]: 7.1d scrub starts
Nov 24 09:28:24 compute-0 ceph-mon[74331]: 3.17 scrub ok
Nov 24 09:28:24 compute-0 ceph-mon[74331]: 7.1d scrub ok
Nov 24 09:28:24 compute-0 ceph-mon[74331]: 4.e deep-scrub starts
Nov 24 09:28:24 compute-0 ceph-mon[74331]: 4.e deep-scrub ok
Nov 24 09:28:24 compute-0 podman[95309]: 2025-11-24 09:28:24.963689565 +0000 UTC m=+0.044044799 container create 692014150b9e47f5cc18698218e11970b75e4f27ff2ec0ad5bc5c4dc55caaa9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 09:28:24 compute-0 sudo[95287]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:25 compute-0 systemd[1]: Started libpod-conmon-692014150b9e47f5cc18698218e11970b75e4f27ff2ec0ad5bc5c4dc55caaa9f.scope.
Nov 24 09:28:25 compute-0 podman[95309]: 2025-11-24 09:28:24.942750601 +0000 UTC m=+0.023105855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:25 compute-0 podman[95309]: 2025-11-24 09:28:25.056804722 +0000 UTC m=+0.137159986 container init 692014150b9e47f5cc18698218e11970b75e4f27ff2ec0ad5bc5c4dc55caaa9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_spence, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 09:28:25 compute-0 podman[95309]: 2025-11-24 09:28:25.066275325 +0000 UTC m=+0.146630549 container start 692014150b9e47f5cc18698218e11970b75e4f27ff2ec0ad5bc5c4dc55caaa9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:28:25 compute-0 podman[95309]: 2025-11-24 09:28:25.071008727 +0000 UTC m=+0.151363951 container attach 692014150b9e47f5cc18698218e11970b75e4f27ff2ec0ad5bc5c4dc55caaa9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:28:25 compute-0 loving_spence[95329]: 167 167
Nov 24 09:28:25 compute-0 systemd[1]: libpod-692014150b9e47f5cc18698218e11970b75e4f27ff2ec0ad5bc5c4dc55caaa9f.scope: Deactivated successfully.
Nov 24 09:28:25 compute-0 podman[95309]: 2025-11-24 09:28:25.073452415 +0000 UTC m=+0.153807649 container died 692014150b9e47f5cc18698218e11970b75e4f27ff2ec0ad5bc5c4dc55caaa9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_spence, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:28:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a4d3733b5ff367b8d24a5057bc8dd699e9e0ca5dccbbcc0cc657ea6e557f55f-merged.mount: Deactivated successfully.
Nov 24 09:28:25 compute-0 podman[95309]: 2025-11-24 09:28:25.115294292 +0000 UTC m=+0.195649516 container remove 692014150b9e47f5cc18698218e11970b75e4f27ff2ec0ad5bc5c4dc55caaa9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:25 compute-0 systemd[1]: libpod-conmon-692014150b9e47f5cc18698218e11970b75e4f27ff2ec0ad5bc5c4dc55caaa9f.scope: Deactivated successfully.
Nov 24 09:28:25 compute-0 sudo[95389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avhlygijrecxgbgqhkxdbqcyknzcfyrj ; /usr/bin/python3'
Nov 24 09:28:25 compute-0 sudo[95389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:25 compute-0 python3[95391]: ansible-ansible.legacy.async_status Invoked with jid=j244454990368.95046 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 09:28:25 compute-0 podman[95397]: 2025-11-24 09:28:25.297556442 +0000 UTC m=+0.051955066 container create 29365c0df3ec4d4b0493b4aaf6bf461f9a4caad957a3ff9e45799904c03bd2e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:28:25 compute-0 sudo[95389]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:25 compute-0 systemd[1]: Started libpod-conmon-29365c0df3ec4d4b0493b4aaf6bf461f9a4caad957a3ff9e45799904c03bd2e4.scope.
Nov 24 09:28:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eefe587c9b4ba36ba2b21654ea8de341e984b12c3b31710eb6d78ff196f308b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eefe587c9b4ba36ba2b21654ea8de341e984b12c3b31710eb6d78ff196f308b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eefe587c9b4ba36ba2b21654ea8de341e984b12c3b31710eb6d78ff196f308b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eefe587c9b4ba36ba2b21654ea8de341e984b12c3b31710eb6d78ff196f308b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:25 compute-0 podman[95397]: 2025-11-24 09:28:25.274334734 +0000 UTC m=+0.028733368 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:25 compute-0 podman[95397]: 2025-11-24 09:28:25.381554154 +0000 UTC m=+0.135952798 container init 29365c0df3ec4d4b0493b4aaf6bf461f9a4caad957a3ff9e45799904c03bd2e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:25 compute-0 podman[95397]: 2025-11-24 09:28:25.388898727 +0000 UTC m=+0.143297341 container start 29365c0df3ec4d4b0493b4aaf6bf461f9a4caad957a3ff9e45799904c03bd2e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:28:25 compute-0 podman[95397]: 2025-11-24 09:28:25.416672332 +0000 UTC m=+0.171070966 container attach 29365c0df3ec4d4b0493b4aaf6bf461f9a4caad957a3ff9e45799904c03bd2e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:28:25 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 12 completed events
Nov 24 09:28:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:28:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]: {
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:     "0": [
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:         {
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             "devices": [
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "/dev/loop3"
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             ],
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             "lv_name": "ceph_lv0",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             "lv_size": "21470642176",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             "name": "ceph_lv0",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             "tags": {
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.cluster_name": "ceph",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.crush_device_class": "",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.encrypted": "0",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.osd_id": "0",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.type": "block",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.vdo": "0",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:                 "ceph.with_tpm": "0"
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             },
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             "type": "block",
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:             "vg_name": "ceph_vg0"
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:         }
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]:     ]
Nov 24 09:28:25 compute-0 infallible_torvalds[95414]: }
Nov 24 09:28:25 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 24 09:28:25 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 24 09:28:25 compute-0 systemd[1]: libpod-29365c0df3ec4d4b0493b4aaf6bf461f9a4caad957a3ff9e45799904c03bd2e4.scope: Deactivated successfully.
Nov 24 09:28:25 compute-0 podman[95397]: 2025-11-24 09:28:25.677342861 +0000 UTC m=+0.431741475 container died 29365c0df3ec4d4b0493b4aaf6bf461f9a4caad957a3ff9e45799904c03bd2e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 09:28:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eefe587c9b4ba36ba2b21654ea8de341e984b12c3b31710eb6d78ff196f308b-merged.mount: Deactivated successfully.
Nov 24 09:28:25 compute-0 podman[95397]: 2025-11-24 09:28:25.727020333 +0000 UTC m=+0.481418947 container remove 29365c0df3ec4d4b0493b4aaf6bf461f9a4caad957a3ff9e45799904c03bd2e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_torvalds, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 09:28:25 compute-0 systemd[1]: libpod-conmon-29365c0df3ec4d4b0493b4aaf6bf461f9a4caad957a3ff9e45799904c03bd2e4.scope: Deactivated successfully.
Nov 24 09:28:25 compute-0 sudo[95196]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:25 compute-0 sudo[95458]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-einmnxktuvcwyzcoomalqclnntjacfuu ; /usr/bin/python3'
Nov 24 09:28:25 compute-0 sudo[95458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:25 compute-0 sudo[95461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:25 compute-0 sudo[95461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:25 compute-0 sudo[95461]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:25 compute-0 sudo[95486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:28:25 compute-0 sudo[95486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:25 compute-0 python3[95460]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:25 compute-0 podman[95511]: 2025-11-24 09:28:25.987305153 +0000 UTC m=+0.044427658 container create a91bf56226d759f94edd1214eddc46c3ab7215b67d86e58d95b41e5876f48bf8 (image=quay.io/ceph/ceph:v19, name=magical_keller, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 09:28:25 compute-0 ceph-mon[74331]: from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:28:25 compute-0 ceph-mon[74331]: pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Nov 24 09:28:25 compute-0 ceph-mon[74331]: 6.1b deep-scrub starts
Nov 24 09:28:25 compute-0 ceph-mon[74331]: 6.1b deep-scrub ok
Nov 24 09:28:25 compute-0 ceph-mon[74331]: 5.14 scrub starts
Nov 24 09:28:25 compute-0 ceph-mon[74331]: 5.14 scrub ok
Nov 24 09:28:25 compute-0 ceph-mon[74331]: 3.3 deep-scrub starts
Nov 24 09:28:25 compute-0 ceph-mon[74331]: 3.3 deep-scrub ok
Nov 24 09:28:25 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:26 compute-0 systemd[1]: Started libpod-conmon-a91bf56226d759f94edd1214eddc46c3ab7215b67d86e58d95b41e5876f48bf8.scope.
Nov 24 09:28:26 compute-0 podman[95511]: 2025-11-24 09:28:25.97103574 +0000 UTC m=+0.028158255 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64d0d4e349e2292f6292c2554ea506e28d6e3c3c61821522e61c36850eab2f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64d0d4e349e2292f6292c2554ea506e28d6e3c3c61821522e61c36850eab2f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:26 compute-0 podman[95511]: 2025-11-24 09:28:26.085327986 +0000 UTC m=+0.142450511 container init a91bf56226d759f94edd1214eddc46c3ab7215b67d86e58d95b41e5876f48bf8 (image=quay.io/ceph/ceph:v19, name=magical_keller, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 24 09:28:26 compute-0 podman[95511]: 2025-11-24 09:28:26.094476342 +0000 UTC m=+0.151598857 container start a91bf56226d759f94edd1214eddc46c3ab7215b67d86e58d95b41e5876f48bf8 (image=quay.io/ceph/ceph:v19, name=magical_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:28:26 compute-0 podman[95511]: 2025-11-24 09:28:26.098547888 +0000 UTC m=+0.155670453 container attach a91bf56226d759f94edd1214eddc46c3ab7215b67d86e58d95b41e5876f48bf8 (image=quay.io/ceph/ceph:v19, name=magical_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:26 compute-0 podman[95589]: 2025-11-24 09:28:26.341406617 +0000 UTC m=+0.052805187 container create e9da9bec166db8f8569c0f7993d5a69fdc7a3bd328dfee9c7ec3434511ce50fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:28:26 compute-0 systemd[1]: Started libpod-conmon-e9da9bec166db8f8569c0f7993d5a69fdc7a3bd328dfee9c7ec3434511ce50fa.scope.
Nov 24 09:28:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:26 compute-0 podman[95589]: 2025-11-24 09:28:26.315010774 +0000 UTC m=+0.026409364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Nov 24 09:28:26 compute-0 podman[95589]: 2025-11-24 09:28:26.425796178 +0000 UTC m=+0.137194768 container init e9da9bec166db8f8569c0f7993d5a69fdc7a3bd328dfee9c7ec3434511ce50fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:28:26 compute-0 podman[95589]: 2025-11-24 09:28:26.43178054 +0000 UTC m=+0.143179110 container start e9da9bec166db8f8569c0f7993d5a69fdc7a3bd328dfee9c7ec3434511ce50fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:26 compute-0 ecstatic_solomon[95606]: 167 167
Nov 24 09:28:26 compute-0 podman[95589]: 2025-11-24 09:28:26.435282332 +0000 UTC m=+0.146680922 container attach e9da9bec166db8f8569c0f7993d5a69fdc7a3bd328dfee9c7ec3434511ce50fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 09:28:26 compute-0 systemd[1]: libpod-e9da9bec166db8f8569c0f7993d5a69fdc7a3bd328dfee9c7ec3434511ce50fa.scope: Deactivated successfully.
Nov 24 09:28:26 compute-0 conmon[95606]: conmon e9da9bec166db8f8569c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9da9bec166db8f8569c0f7993d5a69fdc7a3bd328dfee9c7ec3434511ce50fa.scope/container/memory.events
Nov 24 09:28:26 compute-0 podman[95589]: 2025-11-24 09:28:26.437619817 +0000 UTC m=+0.149018387 container died e9da9bec166db8f8569c0f7993d5a69fdc7a3bd328dfee9c7ec3434511ce50fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_solomon, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:28:26 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14562 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:28:26 compute-0 magical_keller[95527]: 
Nov 24 09:28:26 compute-0 magical_keller[95527]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 09:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f37b1162c6766acaf3ce00dc835e0d09ee5618784cd34f7631c21d03bce5ab9-merged.mount: Deactivated successfully.
Nov 24 09:28:26 compute-0 podman[95589]: 2025-11-24 09:28:26.473238617 +0000 UTC m=+0.184637187 container remove e9da9bec166db8f8569c0f7993d5a69fdc7a3bd328dfee9c7ec3434511ce50fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 09:28:26 compute-0 systemd[1]: libpod-a91bf56226d759f94edd1214eddc46c3ab7215b67d86e58d95b41e5876f48bf8.scope: Deactivated successfully.
Nov 24 09:28:26 compute-0 podman[95511]: 2025-11-24 09:28:26.476071944 +0000 UTC m=+0.533194469 container died a91bf56226d759f94edd1214eddc46c3ab7215b67d86e58d95b41e5876f48bf8 (image=quay.io/ceph/ceph:v19, name=magical_keller, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a64d0d4e349e2292f6292c2554ea506e28d6e3c3c61821522e61c36850eab2f5-merged.mount: Deactivated successfully.
Nov 24 09:28:26 compute-0 systemd[1]: libpod-conmon-e9da9bec166db8f8569c0f7993d5a69fdc7a3bd328dfee9c7ec3434511ce50fa.scope: Deactivated successfully.
Nov 24 09:28:26 compute-0 podman[95511]: 2025-11-24 09:28:26.522169032 +0000 UTC m=+0.579291547 container remove a91bf56226d759f94edd1214eddc46c3ab7215b67d86e58d95b41e5876f48bf8 (image=quay.io/ceph/ceph:v19, name=magical_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:28:26 compute-0 systemd[1]: libpod-conmon-a91bf56226d759f94edd1214eddc46c3ab7215b67d86e58d95b41e5876f48bf8.scope: Deactivated successfully.
Nov 24 09:28:26 compute-0 sudo[95458]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:26 compute-0 podman[95643]: 2025-11-24 09:28:26.686629872 +0000 UTC m=+0.065614829 container create c8d14a5370072d58622c1ed9a4dd75a6e03d5768783830b334728a54de8ece5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_haibt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:28:26 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 24 09:28:26 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 24 09:28:26 compute-0 systemd[1]: Started libpod-conmon-c8d14a5370072d58622c1ed9a4dd75a6e03d5768783830b334728a54de8ece5e.scope.
Nov 24 09:28:26 compute-0 podman[95643]: 2025-11-24 09:28:26.644306134 +0000 UTC m=+0.023291141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f7def158d0b1afa8ba0cc65e8c01eb6f1a2068e089d00af548707a85c53c895/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f7def158d0b1afa8ba0cc65e8c01eb6f1a2068e089d00af548707a85c53c895/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f7def158d0b1afa8ba0cc65e8c01eb6f1a2068e089d00af548707a85c53c895/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f7def158d0b1afa8ba0cc65e8c01eb6f1a2068e089d00af548707a85c53c895/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:26 compute-0 podman[95643]: 2025-11-24 09:28:26.794940877 +0000 UTC m=+0.173925854 container init c8d14a5370072d58622c1ed9a4dd75a6e03d5768783830b334728a54de8ece5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_haibt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:28:26 compute-0 podman[95643]: 2025-11-24 09:28:26.801703097 +0000 UTC m=+0.180688054 container start c8d14a5370072d58622c1ed9a4dd75a6e03d5768783830b334728a54de8ece5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_haibt, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:28:26 compute-0 podman[95643]: 2025-11-24 09:28:26.805746392 +0000 UTC m=+0.184731379 container attach c8d14a5370072d58622c1ed9a4dd75a6e03d5768783830b334728a54de8ece5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_haibt, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 09:28:26 compute-0 ceph-mon[74331]: 7.5 scrub starts
Nov 24 09:28:26 compute-0 ceph-mon[74331]: 7.5 scrub ok
Nov 24 09:28:26 compute-0 ceph-mon[74331]: 7.18 scrub starts
Nov 24 09:28:26 compute-0 ceph-mon[74331]: 7.18 scrub ok
Nov 24 09:28:26 compute-0 ceph-mon[74331]: 4.18 scrub starts
Nov 24 09:28:26 compute-0 ceph-mon[74331]: 4.18 scrub ok
Nov 24 09:28:27 compute-0 sudo[95744]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utkiikfkfcxdfklraujbfvwgdxnhbmzm ; /usr/bin/python3'
Nov 24 09:28:27 compute-0 sudo[95744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:27 compute-0 python3[95750]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:27 compute-0 lvm[95759]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:28:27 compute-0 lvm[95759]: VG ceph_vg0 finished
Nov 24 09:28:27 compute-0 lvm[95772]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:28:27 compute-0 lvm[95772]: VG ceph_vg0 finished
Nov 24 09:28:27 compute-0 festive_haibt[95659]: {}
Nov 24 09:28:27 compute-0 podman[95760]: 2025-11-24 09:28:27.529372233 +0000 UTC m=+0.044418919 container create 466d31ed3a89e9d7f8d1e75ac245db21f6e3472daf4ee7ce52be807ac16f2828 (image=quay.io/ceph/ceph:v19, name=sharp_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:28:27 compute-0 podman[95643]: 2025-11-24 09:28:27.546920087 +0000 UTC m=+0.925905034 container died c8d14a5370072d58622c1ed9a4dd75a6e03d5768783830b334728a54de8ece5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:28:27 compute-0 systemd[1]: libpod-c8d14a5370072d58622c1ed9a4dd75a6e03d5768783830b334728a54de8ece5e.scope: Deactivated successfully.
Nov 24 09:28:27 compute-0 systemd[1]: libpod-c8d14a5370072d58622c1ed9a4dd75a6e03d5768783830b334728a54de8ece5e.scope: Consumed 1.233s CPU time.
Nov 24 09:28:27 compute-0 systemd[1]: Started libpod-conmon-466d31ed3a89e9d7f8d1e75ac245db21f6e3472daf4ee7ce52be807ac16f2828.scope.
Nov 24 09:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f7def158d0b1afa8ba0cc65e8c01eb6f1a2068e089d00af548707a85c53c895-merged.mount: Deactivated successfully.
Nov 24 09:28:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe354288b2a9a20959e43672dbd5fe307048a12f6e594b21eda7ec6260e793e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe354288b2a9a20959e43672dbd5fe307048a12f6e594b21eda7ec6260e793e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:27 compute-0 podman[95643]: 2025-11-24 09:28:27.60472138 +0000 UTC m=+0.983706337 container remove c8d14a5370072d58622c1ed9a4dd75a6e03d5768783830b334728a54de8ece5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_haibt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:28:27 compute-0 podman[95760]: 2025-11-24 09:28:27.509037013 +0000 UTC m=+0.024083729 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:27 compute-0 systemd[1]: libpod-conmon-c8d14a5370072d58622c1ed9a4dd75a6e03d5768783830b334728a54de8ece5e.scope: Deactivated successfully.
Nov 24 09:28:27 compute-0 podman[95760]: 2025-11-24 09:28:27.618023144 +0000 UTC m=+0.133069830 container init 466d31ed3a89e9d7f8d1e75ac245db21f6e3472daf4ee7ce52be807ac16f2828 (image=quay.io/ceph/ceph:v19, name=sharp_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:28:27 compute-0 podman[95760]: 2025-11-24 09:28:27.629359642 +0000 UTC m=+0.144406328 container start 466d31ed3a89e9d7f8d1e75ac245db21f6e3472daf4ee7ce52be807ac16f2828 (image=quay.io/ceph/ceph:v19, name=sharp_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Nov 24 09:28:27 compute-0 podman[95760]: 2025-11-24 09:28:27.63226702 +0000 UTC m=+0.147313706 container attach 466d31ed3a89e9d7f8d1e75ac245db21f6e3472daf4ee7ce52be807ac16f2828 (image=quay.io/ceph/ceph:v19, name=sharp_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:28:27 compute-0 sudo[95486]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:28:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:27 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 24 09:28:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:28:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:27 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 24 09:28:27 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 24 09:28:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 24 09:28:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:27 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev c8897e0d-b10f-45ae-9bc7-8180b2c53e57 (Updating mds.cephfs deployment (+3 -> 3))
Nov 24 09:28:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.bbilht", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 24 09:28:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.bbilht", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 09:28:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.bbilht", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 09:28:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:28:27 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:27 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.bbilht on compute-2
Nov 24 09:28:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.bbilht on compute-2
Nov 24 09:28:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:28 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:28:28 compute-0 ceph-mon[74331]: pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Nov 24 09:28:28 compute-0 ceph-mon[74331]: from='client.14562 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:28:28 compute-0 ceph-mon[74331]: 6.1e scrub starts
Nov 24 09:28:28 compute-0 ceph-mon[74331]: 6.1e scrub ok
Nov 24 09:28:28 compute-0 ceph-mon[74331]: 7.2 scrub starts
Nov 24 09:28:28 compute-0 ceph-mon[74331]: 7.2 scrub ok
Nov 24 09:28:28 compute-0 ceph-mon[74331]: 5.18 scrub starts
Nov 24 09:28:28 compute-0 ceph-mon[74331]: 5.18 scrub ok
Nov 24 09:28:28 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:28 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:28 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:28 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:28 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.bbilht", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 09:28:28 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.bbilht", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 09:28:28 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:28 compute-0 sharp_meitner[95790]: 
Nov 24 09:28:28 compute-0 sharp_meitner[95790]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 24 09:28:28 compute-0 systemd[1]: libpod-466d31ed3a89e9d7f8d1e75ac245db21f6e3472daf4ee7ce52be807ac16f2828.scope: Deactivated successfully.
Nov 24 09:28:28 compute-0 podman[95760]: 2025-11-24 09:28:28.032947043 +0000 UTC m=+0.547993739 container died 466d31ed3a89e9d7f8d1e75ac245db21f6e3472daf4ee7ce52be807ac16f2828 (image=quay.io/ceph/ceph:v19, name=sharp_meitner, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffe354288b2a9a20959e43672dbd5fe307048a12f6e594b21eda7ec6260e793e-merged.mount: Deactivated successfully.
Nov 24 09:28:28 compute-0 podman[95760]: 2025-11-24 09:28:28.072831514 +0000 UTC m=+0.587878200 container remove 466d31ed3a89e9d7f8d1e75ac245db21f6e3472daf4ee7ce52be807ac16f2828 (image=quay.io/ceph/ceph:v19, name=sharp_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 24 09:28:28 compute-0 systemd[1]: libpod-conmon-466d31ed3a89e9d7f8d1e75ac245db21f6e3472daf4ee7ce52be807ac16f2828.scope: Deactivated successfully.
Nov 24 09:28:28 compute-0 sudo[95744]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v16: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:28 compute-0 ansible-async_wrapper.py[95065]: Done in kid B.
Nov 24 09:28:28 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 24 09:28:28 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 24 09:28:29 compute-0 ceph-mon[74331]: 6.17 scrub starts
Nov 24 09:28:29 compute-0 ceph-mon[74331]: 6.17 scrub ok
Nov 24 09:28:29 compute-0 ceph-mon[74331]: 7.3 scrub starts
Nov 24 09:28:29 compute-0 ceph-mon[74331]: 7.3 scrub ok
Nov 24 09:28:29 compute-0 ceph-mon[74331]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 24 09:28:29 compute-0 ceph-mon[74331]: Deploying daemon mds.cephfs.compute-2.bbilht on compute-2
Nov 24 09:28:29 compute-0 ceph-mon[74331]: from='client.14568 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:28:29 compute-0 ceph-mon[74331]: 4.1b scrub starts
Nov 24 09:28:29 compute-0 ceph-mon[74331]: 4.1b scrub ok
Nov 24 09:28:29 compute-0 ceph-mon[74331]: pgmap v16: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:29 compute-0 sudo[95850]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wndvpmxuklkifvyukjrrodmdoxjgoqnx ; /usr/bin/python3'
Nov 24 09:28:29 compute-0 sudo[95850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:29 compute-0 python3[95852]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:29 compute-0 podman[95853]: 2025-11-24 09:28:29.245016607 +0000 UTC m=+0.055741166 container create 929f62670b0d0d0b03d33dd0101592f62d121cbf4d7c0982396b0dfd2de5ac3c (image=quay.io/ceph/ceph:v19, name=competent_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:28:29 compute-0 systemd[1]: Started libpod-conmon-929f62670b0d0d0b03d33dd0101592f62d121cbf4d7c0982396b0dfd2de5ac3c.scope.
Nov 24 09:28:29 compute-0 podman[95853]: 2025-11-24 09:28:29.214072867 +0000 UTC m=+0.024797456 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4c8aad8d1c0e8ba297bce35ae21f5d34c5c89c7f0557977b826c13437ebbe6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4c8aad8d1c0e8ba297bce35ae21f5d34c5c89c7f0557977b826c13437ebbe6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:29 compute-0 podman[95853]: 2025-11-24 09:28:29.33840726 +0000 UTC m=+0.149131829 container init 929f62670b0d0d0b03d33dd0101592f62d121cbf4d7c0982396b0dfd2de5ac3c (image=quay.io/ceph/ceph:v19, name=competent_curran, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:29 compute-0 podman[95853]: 2025-11-24 09:28:29.344889753 +0000 UTC m=+0.155614312 container start 929f62670b0d0d0b03d33dd0101592f62d121cbf4d7c0982396b0dfd2de5ac3c (image=quay.io/ceph/ceph:v19, name=competent_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:28:29 compute-0 podman[95853]: 2025-11-24 09:28:29.348222772 +0000 UTC m=+0.158947381 container attach 929f62670b0d0d0b03d33dd0101592f62d121cbf4d7c0982396b0dfd2de5ac3c (image=quay.io/ceph/ceph:v19, name=competent_curran, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 09:28:29 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 24 09:28:29 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 24 09:28:29 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:28:29 compute-0 competent_curran[95868]: 
Nov 24 09:28:29 compute-0 competent_curran[95868]: [{"container_id": "dc063fe0e39f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.12%", "created": "2025-11-24T09:25:44.737912Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T09:28:12.751616Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2025-11-24T09:25:44.629195Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@crash.compute-0", "version": "19.2.3"}, {"container_id": "fca3d6a645ca", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.42%", "created": "2025-11-24T09:26:19.222590Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-24T09:28:12.153441Z", "memory_usage": 7812939, "ports": [], "service_name": "crash", "started": "2025-11-24T09:26:19.115107Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@crash.compute-1", "version": "19.2.3"}, {"container_id": "bb2dda4b9803", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.43%", "created": "2025-11-24T09:27:15.667910Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-24T09:28:12.752893Z", "memory_usage": 7812939, "ports": [], "service_name": "crash", "started": "2025-11-24T09:27:15.534825Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@crash.compute-2", "version": "19.2.3"}, {"container_id": "df5dc55b63c9", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "28.55%", "created": "2025-11-24T09:25:09.511070Z", "daemon_id": "compute-0.mauvni", "daemon_name": "mgr.compute-0.mauvni", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T09:28:12.751517Z", "memory_usage": 542533222, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-24T09:25:09.365988Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@mgr.compute-0.mauvni", "version": "19.2.3"}, {"container_id": "060bf9a9568d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "43.56%", "created": "2025-11-24T09:27:13.528725Z", "daemon_id": "compute-1.qelqsg", "daemon_name": "mgr.compute-1.qelqsg", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-24T09:28:12.153723Z", "memory_usage": 503631052, "ports": [8765], "service_name": "mgr", "started": "2025-11-24T09:27:13.440055Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@mgr.compute-1.qelqsg", "version": "19.2.3"}, {"container_id": "6b33d25e7f15", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "39.51%", "created": "2025-11-24T09:27:07.944712Z", "daemon_id": "compute-2.rzcnzg", "daemon_name": "mgr.compute-2.rzcnzg", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-24T09:28:12.752801Z", "memory_usage": 503735910, "ports": [8765], "service_name": "mgr", "started": "2025-11-24T09:27:07.831228Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@mgr.compute-2.rzcnzg", "version": "19.2.3"}, {"container_id": "926e81c0f890", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.71%", "created": "2025-11-24T09:25:05.470974Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T09:28:12.751409Z", "memory_request": 2147483648, "memory_usage": 59978547, "ports": [], "service_name": "mon", "started": "2025-11-24T09:25:07.567047Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@mon.compute-0", "version": "19.2.3"}, {"container_id": "515e62465fc9", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.10%", "created": "2025-11-24T09:27:02.727511Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-24T09:28:12.153616Z", "memory_request": 2147483648, "memory_usage": 51002736, "ports": [], "service_name": "mon", "started": "2025-11-24T09:27:02.637523Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@mon.compute-1", "version": "19.2.3"}, {"container_id": "c1530d44c3cf", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "3.02%", "created": "2025-11-24T09:27:00.802530Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-24T09:28:12.752664Z", "memory_request": 2147483648, "memory_usage": 50373591, "ports": [], "service_name": "mon", "started": "2025-11-24T09:27:00.687477Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@mon.compute-2", "version": "19.2.3"}, {"container_id": "7b41a24888e2", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80", "quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.12%", "created": "2025-11-24T09:27:59.653514Z", "daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T09:28:12.751852Z", "memory_usage": 5918162, "ports": [9100], "service_name": "node-exporter", "started": "2025-11-24T09:27:59.587076Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@node-exporter.compute-0", "version": "1.7.0"}, {"daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "events": ["2025-11-24T09:28:19.870080Z daemon:node-exporter.compute-1 [INFO] \"Deployed node-exporter.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2025-11-24T09:28:22.856139Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "1545a78bd796", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.62%", "created": "2025-11-24T09:26:30.903029Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T09:28:12.751715Z", "memory_request": 4294967296, "memory_usage": 80509665, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-24T09:26:30.809923Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@osd.0", "version": "19.2.3"}, {"container_id": "074006852d3b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.55%", "created": "2025-11-24T09:26:30.579867Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-24T09:28:12.153545Z", "memory_request": 4294967296, "memory_usage": 70820823, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-24T09:26:30.489784Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@osd.1", "version": "19.2.3"}, {"container_id": "8b8bdd446fab", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "3.80%", "created": "2025-11-24T09:27:27.991225Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-24T09:28:12.752989Z", "memory_request": 4294967296, "memory_usage": 71313653, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-24T09:27:27.883562Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@osd.2", "version": "19.2.3"}, {"container_id": "db2712baa80b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.78%", "created": "2025-11-24T09:27:45.350186Z", "daemon_id": "rgw.compute-0.zlrxyg", "daemon_name": "rgw.rgw.compute-0.zlrxyg", "daemon_type": "rgw", "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-11-24T09:28:12.751786Z", "memory_usage": 100914954, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-11-24T09:27:45.251062Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@rgw.rgw.compute-0.zlrxyg", "version": "19.2.3"}, {"container_id": "057f5f369766", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.83%", "created": "2025-11-24T09:27:43.709354Z", "daemon_id": "rgw.compute-1.vproll", "daemon_name": "rgw.rgw.compute-1.vproll", "daemon_type": "rgw", "hostname": "compute-1", "ip": "192.168.122.101", "is_active": false, "last_refresh": "2025-11-24T09:28:12.153799Z", "memory_usage": 104584970, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-11-24T09:27:43.626458Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@rgw.rgw.compute-1.vproll", "version": "19.2.3"}, {"container_id": "b32af8cc53ea", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.41%", "created": "2025-11-24T09:27:42.096580Z", "daemon_id": "rgw.compute-2.qecnjt", "daemon_name": "rgw.rgw.compute-2.qecnjt", "daemon_type": "rgw", "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "last_refresh": "2025-11-24T09:28:12.753064Z", "memory_usage": 101785272, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-11-24T09:27:41.856046Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@rgw.rgw.compute-2.qecnjt", "version": "19.2.3"}]
Nov 24 09:28:29 compute-0 systemd[1]: libpod-929f62670b0d0d0b03d33dd0101592f62d121cbf4d7c0982396b0dfd2de5ac3c.scope: Deactivated successfully.
Nov 24 09:28:29 compute-0 conmon[95868]: conmon 929f62670b0d0d0b03d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-929f62670b0d0d0b03d33dd0101592f62d121cbf4d7c0982396b0dfd2de5ac3c.scope/container/memory.events
Nov 24 09:28:29 compute-0 podman[95853]: 2025-11-24 09:28:29.704437405 +0000 UTC m=+0.515161974 container died 929f62670b0d0d0b03d33dd0101592f62d121cbf4d7c0982396b0dfd2de5ac3c (image=quay.io/ceph/ceph:v19, name=competent_curran, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d4c8aad8d1c0e8ba297bce35ae21f5d34c5c89c7f0557977b826c13437ebbe6-merged.mount: Deactivated successfully.
Nov 24 09:28:29 compute-0 podman[95853]: 2025-11-24 09:28:29.746063747 +0000 UTC m=+0.556788296 container remove 929f62670b0d0d0b03d33dd0101592f62d121cbf4d7c0982396b0dfd2de5ac3c (image=quay.io/ceph/ceph:v19, name=competent_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:28:29 compute-0 systemd[1]: libpod-conmon-929f62670b0d0d0b03d33dd0101592f62d121cbf4d7c0982396b0dfd2de5ac3c.scope: Deactivated successfully.
Nov 24 09:28:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:28:29 compute-0 sudo[95850]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:28:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 24 09:28:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cibmfe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 24 09:28:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cibmfe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 09:28:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cibmfe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 09:28:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:28:29 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:29 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.cibmfe on compute-0
Nov 24 09:28:29 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.cibmfe on compute-0
Nov 24 09:28:29 compute-0 sudo[95905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:29 compute-0 sudo[95905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:29 compute-0 sudo[95905]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:29 compute-0 rsyslogd[1004]: message too long (15159) with configured size 8096, begin of message is: [{"container_id": "dc063fe0e39f", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 24 09:28:29 compute-0 sudo[95930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:29 compute-0 sudo[95930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e3 new map
Nov 24 09:28:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-11-24T09:28:30:031773+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:11.441245+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.bbilht{-1:24181} state up:standby seq 1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 09:28:30 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] up:boot
Nov 24 09:28:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] as mds.0
Nov 24 09:28:30 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.bbilht assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 24 09:28:30 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 24 09:28:30 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 24 09:28:30 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 09:28:30 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 24 09:28:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.bbilht"} v 0)
Nov 24 09:28:30 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.bbilht"}]: dispatch
Nov 24 09:28:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e3 all = 0
Nov 24 09:28:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e4 new map
Nov 24 09:28:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-11-24T09:28:30:045188+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:30.045062+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:creating seq 1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Nov 24 09:28:30 compute-0 ceph-mon[74331]: 6.12 scrub starts
Nov 24 09:28:30 compute-0 ceph-mon[74331]: 6.12 scrub ok
Nov 24 09:28:30 compute-0 ceph-mon[74331]: 7.4 scrub starts
Nov 24 09:28:30 compute-0 ceph-mon[74331]: 7.4 scrub ok
Nov 24 09:28:30 compute-0 ceph-mon[74331]: 4.c scrub starts
Nov 24 09:28:30 compute-0 ceph-mon[74331]: 4.c scrub ok
Nov 24 09:28:30 compute-0 ceph-mon[74331]: from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 09:28:30 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:30 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:30 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:30 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cibmfe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 09:28:30 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cibmfe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 09:28:30 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:30 compute-0 ceph-mon[74331]: Deploying daemon mds.cephfs.compute-0.cibmfe on compute-0
Nov 24 09:28:30 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:creating}
Nov 24 09:28:30 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.bbilht is now active in filesystem cephfs as rank 0
Nov 24 09:28:30 compute-0 podman[95996]: 2025-11-24 09:28:30.377144996 +0000 UTC m=+0.050234547 container create d179d691a711d81cf89ca7f138bddca91481e9426615c037e63222af699b09fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_banach, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:28:30 compute-0 systemd[1]: Started libpod-conmon-d179d691a711d81cf89ca7f138bddca91481e9426615c037e63222af699b09fb.scope.
Nov 24 09:28:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v17: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:30 compute-0 podman[95996]: 2025-11-24 09:28:30.436954046 +0000 UTC m=+0.110043647 container init d179d691a711d81cf89ca7f138bddca91481e9426615c037e63222af699b09fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:28:30 compute-0 podman[95996]: 2025-11-24 09:28:30.444410602 +0000 UTC m=+0.117500183 container start d179d691a711d81cf89ca7f138bddca91481e9426615c037e63222af699b09fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_banach, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:28:30 compute-0 podman[95996]: 2025-11-24 09:28:30.448072748 +0000 UTC m=+0.121162339 container attach d179d691a711d81cf89ca7f138bddca91481e9426615c037e63222af699b09fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 09:28:30 compute-0 crazy_banach[96013]: 167 167
Nov 24 09:28:30 compute-0 systemd[1]: libpod-d179d691a711d81cf89ca7f138bddca91481e9426615c037e63222af699b09fb.scope: Deactivated successfully.
Nov 24 09:28:30 compute-0 podman[95996]: 2025-11-24 09:28:30.450034465 +0000 UTC m=+0.123124026 container died d179d691a711d81cf89ca7f138bddca91481e9426615c037e63222af699b09fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 09:28:30 compute-0 podman[95996]: 2025-11-24 09:28:30.361669241 +0000 UTC m=+0.034758822 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ffe1d955c844a141e913fbf3e7eaa1015889601c93a895d803101bea64b5b2f-merged.mount: Deactivated successfully.
Nov 24 09:28:30 compute-0 podman[95996]: 2025-11-24 09:28:30.481270142 +0000 UTC m=+0.154359703 container remove d179d691a711d81cf89ca7f138bddca91481e9426615c037e63222af699b09fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_banach, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:28:30 compute-0 systemd[1]: libpod-conmon-d179d691a711d81cf89ca7f138bddca91481e9426615c037e63222af699b09fb.scope: Deactivated successfully.
Nov 24 09:28:30 compute-0 systemd[1]: Reloading.
Nov 24 09:28:30 compute-0 systemd-sysv-generator[96059]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:30 compute-0 systemd-rc-local-generator[96055]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:30 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 24 09:28:30 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 24 09:28:30 compute-0 sudo[96089]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpypflnhapahwxainmdetqmluvndaffr ; /usr/bin/python3'
Nov 24 09:28:30 compute-0 sudo[96089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:30 compute-0 systemd[1]: Reloading.
Nov 24 09:28:30 compute-0 systemd-rc-local-generator[96125]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:30 compute-0 systemd-sysv-generator[96129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:30 compute-0 python3[96093]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:31 compute-0 podman[96132]: 2025-11-24 09:28:31.026412062 +0000 UTC m=+0.055626443 container create 28bca7b90fc896ea263d2c17daab9669bd7cd67e7a5790c3dac7c751276657cb (image=quay.io/ceph/ceph:v19, name=unruffled_wilbur, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 24 09:28:31 compute-0 ceph-mon[74331]: 7.e scrub starts
Nov 24 09:28:31 compute-0 ceph-mon[74331]: 7.e scrub ok
Nov 24 09:28:31 compute-0 ceph-mon[74331]: mds.? [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] up:boot
Nov 24 09:28:31 compute-0 ceph-mon[74331]: daemon mds.cephfs.compute-2.bbilht assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 24 09:28:31 compute-0 ceph-mon[74331]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 24 09:28:31 compute-0 ceph-mon[74331]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 24 09:28:31 compute-0 ceph-mon[74331]: Cluster is now healthy
Nov 24 09:28:31 compute-0 ceph-mon[74331]: fsmap cephfs:0 1 up:standby
Nov 24 09:28:31 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.bbilht"}]: dispatch
Nov 24 09:28:31 compute-0 ceph-mon[74331]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:creating}
Nov 24 09:28:31 compute-0 ceph-mon[74331]: daemon mds.cephfs.compute-2.bbilht is now active in filesystem cephfs as rank 0
Nov 24 09:28:31 compute-0 ceph-mon[74331]: 4.1a scrub starts
Nov 24 09:28:31 compute-0 ceph-mon[74331]: 4.1a scrub ok
Nov 24 09:28:31 compute-0 ceph-mon[74331]: pgmap v17: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:28:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e5 new map
Nov 24 09:28:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-11-24T09:28:31:054777+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:31.054773+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Nov 24 09:28:31 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] up:active
Nov 24 09:28:31 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active}
Nov 24 09:28:31 compute-0 podman[96132]: 2025-11-24 09:28:30.999514738 +0000 UTC m=+0.028729159 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:31 compute-0 systemd[1]: Started libpod-conmon-28bca7b90fc896ea263d2c17daab9669bd7cd67e7a5790c3dac7c751276657cb.scope.
Nov 24 09:28:31 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.cibmfe for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:28:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6194e9b4c5cd513f4ff0dc410dc06d305a047b9857339962977f3bc88ab677e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6194e9b4c5cd513f4ff0dc410dc06d305a047b9857339962977f3bc88ab677e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:31 compute-0 podman[96132]: 2025-11-24 09:28:31.156420409 +0000 UTC m=+0.185634820 container init 28bca7b90fc896ea263d2c17daab9669bd7cd67e7a5790c3dac7c751276657cb (image=quay.io/ceph/ceph:v19, name=unruffled_wilbur, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:28:31 compute-0 podman[96132]: 2025-11-24 09:28:31.165861072 +0000 UTC m=+0.195075443 container start 28bca7b90fc896ea263d2c17daab9669bd7cd67e7a5790c3dac7c751276657cb (image=quay.io/ceph/ceph:v19, name=unruffled_wilbur, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:28:31 compute-0 podman[96132]: 2025-11-24 09:28:31.169317403 +0000 UTC m=+0.198531784 container attach 28bca7b90fc896ea263d2c17daab9669bd7cd67e7a5790c3dac7c751276657cb (image=quay.io/ceph/ceph:v19, name=unruffled_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:28:31 compute-0 podman[96220]: 2025-11-24 09:28:31.403888227 +0000 UTC m=+0.058887120 container create 4a7ccebb8003fe21edcdd8246bcb3e3e9c9abf8d48f91414d06252aae3a770b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mds-cephfs-compute-0-cibmfe, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 09:28:31 compute-0 podman[96220]: 2025-11-24 09:28:31.375338714 +0000 UTC m=+0.030337617 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46b41b055fa12ba81bb13632a7b0ebd64743c131194b427fd35d8f69f76c7df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46b41b055fa12ba81bb13632a7b0ebd64743c131194b427fd35d8f69f76c7df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46b41b055fa12ba81bb13632a7b0ebd64743c131194b427fd35d8f69f76c7df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46b41b055fa12ba81bb13632a7b0ebd64743c131194b427fd35d8f69f76c7df/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.cibmfe supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:31 compute-0 podman[96220]: 2025-11-24 09:28:31.500985268 +0000 UTC m=+0.155984161 container init 4a7ccebb8003fe21edcdd8246bcb3e3e9c9abf8d48f91414d06252aae3a770b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mds-cephfs-compute-0-cibmfe, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:28:31 compute-0 podman[96220]: 2025-11-24 09:28:31.509042418 +0000 UTC m=+0.164041311 container start 4a7ccebb8003fe21edcdd8246bcb3e3e9c9abf8d48f91414d06252aae3a770b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mds-cephfs-compute-0-cibmfe, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 09:28:31 compute-0 bash[96220]: 4a7ccebb8003fe21edcdd8246bcb3e3e9c9abf8d48f91414d06252aae3a770b5
Nov 24 09:28:31 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.cibmfe for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:28:31 compute-0 ceph-mds[96241]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 09:28:31 compute-0 ceph-mds[96241]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Nov 24 09:28:31 compute-0 ceph-mds[96241]: main not setting numa affinity
Nov 24 09:28:31 compute-0 ceph-mds[96241]: pidfile_write: ignore empty --pid-file
Nov 24 09:28:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mds-cephfs-compute-0-cibmfe[96237]: starting mds.cephfs.compute-0.cibmfe at 
Nov 24 09:28:31 compute-0 sudo[95930]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:31 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Updating MDS map to version 5 from mon.0
Nov 24 09:28:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:28:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:28:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 24 09:28:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vpamdk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 24 09:28:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vpamdk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 09:28:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 24 09:28:31 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2235972342' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 09:28:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vpamdk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 09:28:31 compute-0 unruffled_wilbur[96149]: 
Nov 24 09:28:31 compute-0 unruffled_wilbur[96149]: {"fsid":"84a084c3-61a7-5de7-8207-1f88efa59a64","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":79,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":51,"num_osds":3,"num_up_osds":3,"osd_up_since":1763976455,"num_in_osds":3,"osd_in_since":1763976437,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":107503616,"bytes_avail":64304422912,"bytes_total":64411926528},"fsmap":{"epoch":5,"btime":"2025-11-24T09:28:31:054777+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.bbilht","status":"up:active","gid":24181}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-11-24T09:27:51.721300+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.mauvni":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.qelqsg":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.rzcnzg":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14394":{"start_epoch":4,"start_stamp":"2025-11-24T09:27:51.712701+0000","gid":14394,"addr":"192.168.122.100:0/2097383266","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.zlrxyg","kernel_description":"#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 
2025","kernel_version":"5.14.0-639.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864320","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"0565e2b2-234e-414b-b909-932048ceb050","zone_name":"default","zonegroup_id":"5f03f326-32a0-4275-804c-1875d841eeca","zonegroup_name":"default"},"task_status":{}},"24148":{"start_epoch":4,"start_stamp":"2025-11-24T09:27:51.718135+0000","gid":24148,"addr":"192.168.122.102:0/2761939167","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.qecnjt","kernel_description":"#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025","kernel_version":"5.14.0-639.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864312","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"0565e2b2-234e-414b-b909-932048ceb050","zone_name":"default","zonegroup_id":"5f03f326-32a0-4275-804c-1875d841eeca","zonegroup_name":"default"},"task_status":{}},"24191":{"start_epoch":4,"start_stamp":"2025-11-24T09:27:51.712940+0000","gid":24191,"addr":"192.168.122.101:0/2580956473","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.vproll","kernel_description":"#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025","kernel_version":"5.14.0-639.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864320","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"0565e2b2-234e-414b-b909-932048ceb050","zone_name":"default","zonegroup_id":"5f03f326-32a0-4275-804c-1875d841eeca","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"c8897e0d-b10f-45ae-9bc7-8180b2c53e57":{"message":"Updating mds.cephfs deployment (+3 -> 3) (2s)\n      [=========...................] (remaining: 4s)","progress":0.3333333432674408,"add_to_ceph_s":true}}}
Nov 24 09:28:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:28:31 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:31 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.vpamdk on compute-1
Nov 24 09:28:31 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.vpamdk on compute-1
Nov 24 09:28:31 compute-0 systemd[1]: libpod-28bca7b90fc896ea263d2c17daab9669bd7cd67e7a5790c3dac7c751276657cb.scope: Deactivated successfully.
Nov 24 09:28:31 compute-0 podman[96132]: 2025-11-24 09:28:31.653393103 +0000 UTC m=+0.682607554 container died 28bca7b90fc896ea263d2c17daab9669bd7cd67e7a5790c3dac7c751276657cb (image=quay.io/ceph/ceph:v19, name=unruffled_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:28:31 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 24 09:28:31 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 24 09:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6194e9b4c5cd513f4ff0dc410dc06d305a047b9857339962977f3bc88ab677e6-merged.mount: Deactivated successfully.
Nov 24 09:28:31 compute-0 podman[96132]: 2025-11-24 09:28:31.702702076 +0000 UTC m=+0.731916497 container remove 28bca7b90fc896ea263d2c17daab9669bd7cd67e7a5790c3dac7c751276657cb (image=quay.io/ceph/ceph:v19, name=unruffled_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:28:31 compute-0 systemd[1]: libpod-conmon-28bca7b90fc896ea263d2c17daab9669bd7cd67e7a5790c3dac7c751276657cb.scope: Deactivated successfully.
Nov 24 09:28:31 compute-0 sudo[96089]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:32 compute-0 ceph-mon[74331]: 7.f scrub starts
Nov 24 09:28:32 compute-0 ceph-mon[74331]: 7.f scrub ok
Nov 24 09:28:32 compute-0 ceph-mon[74331]: mds.? [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] up:active
Nov 24 09:28:32 compute-0 ceph-mon[74331]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active}
Nov 24 09:28:32 compute-0 ceph-mon[74331]: 5.1c scrub starts
Nov 24 09:28:32 compute-0 ceph-mon[74331]: 5.1c scrub ok
Nov 24 09:28:32 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:32 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:32 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:32 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vpamdk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 09:28:32 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2235972342' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 09:28:32 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vpamdk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 09:28:32 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:32 compute-0 ceph-mon[74331]: Deploying daemon mds.cephfs.compute-1.vpamdk on compute-1
Nov 24 09:28:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e6 new map
Nov 24 09:28:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2025-11-24T09:28:32.078769+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:31.054773+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 09:28:32 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Updating MDS map to version 6 from mon.0
Nov 24 09:28:32 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Monitors have assigned me to become a standby
Nov 24 09:28:32 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] up:boot
Nov 24 09:28:32 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 1 up:standby
Nov 24 09:28:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.cibmfe"} v 0)
Nov 24 09:28:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cibmfe"}]: dispatch
Nov 24 09:28:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e6 all = 0
Nov 24 09:28:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e7 new map
Nov 24 09:28:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2025-11-24T09:28:32.111568+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:31.054773+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 2 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 09:28:32 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 1 up:standby
Nov 24 09:28:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 24 09:28:32 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 24 09:28:32 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 24 09:28:32 compute-0 sudo[96296]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eprecoleidmhiyfwhoqjjstvkjimjdtw ; /usr/bin/python3'
Nov 24 09:28:32 compute-0 sudo[96296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:32 compute-0 python3[96298]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:32 compute-0 podman[96299]: 2025-11-24 09:28:32.890418966 +0000 UTC m=+0.051866104 container create 3728f1795ca6e49aa877bedd807f5c81335b10c9713fb83a8988951296fece3b (image=quay.io/ceph/ceph:v19, name=amazing_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 24 09:28:32 compute-0 systemd[1]: Started libpod-conmon-3728f1795ca6e49aa877bedd807f5c81335b10c9713fb83a8988951296fece3b.scope.
Nov 24 09:28:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40f02ba05528c72149fb600f62d373d2d9537b75e769e6bb0c304763f2ec580e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40f02ba05528c72149fb600f62d373d2d9537b75e769e6bb0c304763f2ec580e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:32 compute-0 podman[96299]: 2025-11-24 09:28:32.867129107 +0000 UTC m=+0.028576265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:32 compute-0 podman[96299]: 2025-11-24 09:28:32.974395227 +0000 UTC m=+0.135842395 container init 3728f1795ca6e49aa877bedd807f5c81335b10c9713fb83a8988951296fece3b (image=quay.io/ceph/ceph:v19, name=amazing_yonath, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:28:32 compute-0 podman[96299]: 2025-11-24 09:28:32.983269567 +0000 UTC m=+0.144716695 container start 3728f1795ca6e49aa877bedd807f5c81335b10c9713fb83a8988951296fece3b (image=quay.io/ceph/ceph:v19, name=amazing_yonath, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:28:32 compute-0 podman[96299]: 2025-11-24 09:28:32.987857644 +0000 UTC m=+0.149304892 container attach 3728f1795ca6e49aa877bedd807f5c81335b10c9713fb83a8988951296fece3b (image=quay.io/ceph/ceph:v19, name=amazing_yonath, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: 7.8 scrub starts
Nov 24 09:28:33 compute-0 ceph-mon[74331]: 7.8 scrub ok
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mds.? [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] up:boot
Nov 24 09:28:33 compute-0 ceph-mon[74331]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 1 up:standby
Nov 24 09:28:33 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cibmfe"}]: dispatch
Nov 24 09:28:33 compute-0 ceph-mon[74331]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 1 up:standby
Nov 24 09:28:33 compute-0 ceph-mon[74331]: 5.1b scrub starts
Nov 24 09:28:33 compute-0 ceph-mon[74331]: 5.1b scrub ok
Nov 24 09:28:33 compute-0 ceph-mon[74331]: pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev c8897e0d-b10f-45ae-9bc7-8180b2c53e57 (Updating mds.cephfs deployment (+3 -> 3))
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event c8897e0d-b10f-45ae-9bc7-8180b2c53e57 (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev c180ba8d-7383-4ea2-88c7-36c847a2ea3c (Updating nfs.cephfs deployment (+3 -> 3))
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.vvoanr
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.vvoanr
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660329950' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:28:33 compute-0 amazing_yonath[96314]: 
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 24 09:28:33 compute-0 amazing_yonath[96314]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_i
nsecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.mauvni/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.qelqsg/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.rzcnzg/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.zlrxyg","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.vproll","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.qecnjt","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 24 09:28:33 compute-0 systemd[1]: libpod-3728f1795ca6e49aa877bedd807f5c81335b10c9713fb83a8988951296fece3b.scope: Deactivated successfully.
Nov 24 09:28:33 compute-0 podman[96355]: 2025-11-24 09:28:33.418605077 +0000 UTC m=+0.029191190 container died 3728f1795ca6e49aa877bedd807f5c81335b10c9713fb83a8988951296fece3b (image=quay.io/ceph/ceph:v19, name=amazing_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 09:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-40f02ba05528c72149fb600f62d373d2d9537b75e769e6bb0c304763f2ec580e-merged.mount: Deactivated successfully.
Nov 24 09:28:33 compute-0 podman[96355]: 2025-11-24 09:28:33.45902795 +0000 UTC m=+0.069614033 container remove 3728f1795ca6e49aa877bedd807f5c81335b10c9713fb83a8988951296fece3b (image=quay.io/ceph/ceph:v19, name=amazing_yonath, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:33 compute-0 systemd[1]: libpod-conmon-3728f1795ca6e49aa877bedd807f5c81335b10c9713fb83a8988951296fece3b.scope: Deactivated successfully.
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.vvoanr-rgw
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.vvoanr-rgw
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.vvoanr's ganesha conf is defaulting to empty
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.vvoanr's ganesha conf is defaulting to empty
Nov 24 09:28:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:28:33 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:33 compute-0 sudo[96296]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.vvoanr on compute-1
Nov 24 09:28:33 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.vvoanr on compute-1
Nov 24 09:28:33 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Nov 24 09:28:33 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Nov 24 09:28:34 compute-0 ceph-mon[74331]: 7.b scrub starts
Nov 24 09:28:34 compute-0 ceph-mon[74331]: 7.b scrub ok
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:34 compute-0 ceph-mon[74331]: 5.2 scrub starts
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:34 compute-0 ceph-mon[74331]: 5.2 scrub ok
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:34 compute-0 ceph-mon[74331]: Creating key for client.nfs.cephfs.0.0.compute-1.vvoanr
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 24 09:28:34 compute-0 ceph-mon[74331]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1660329950' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 24 09:28:34 compute-0 ceph-mon[74331]: Rados config object exists: conf-nfs.cephfs
Nov 24 09:28:34 compute-0 ceph-mon[74331]: Creating key for client.nfs.cephfs.0.0.compute-1.vvoanr-rgw
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vvoanr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:28:34 compute-0 ceph-mon[74331]: Bind address in nfs.cephfs.0.0.compute-1.vvoanr's ganesha conf is defaulting to empty
Nov 24 09:28:34 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:34 compute-0 ceph-mon[74331]: Deploying daemon nfs.cephfs.0.0.compute-1.vvoanr on compute-1
Nov 24 09:28:34 compute-0 ceph-mon[74331]: 7.9 deep-scrub starts
Nov 24 09:28:34 compute-0 ceph-mon[74331]: 7.9 deep-scrub ok
Nov 24 09:28:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e8 new map
Nov 24 09:28:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2025-11-24T09:28:34.108975+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:34.078187+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.vpamdk{-1:24302} state up:standby seq 1 addr [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 09:28:34 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] up:boot
Nov 24 09:28:34 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] up:active
Nov 24 09:28:34 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 2 up:standby
Nov 24 09:28:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.vpamdk"} v 0)
Nov 24 09:28:34 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vpamdk"}]: dispatch
Nov 24 09:28:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e8 all = 0
Nov 24 09:28:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 24 09:28:34 compute-0 sudo[96411]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtvicjwzrmlzakctkgxljlazrtbotcxv ; /usr/bin/python3'
Nov 24 09:28:34 compute-0 sudo[96411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:34 compute-0 python3[96413]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:34 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 24 09:28:34 compute-0 podman[96414]: 2025-11-24 09:28:34.663488544 +0000 UTC m=+0.039416740 container create 12b50e1d0d014aec3c031134db4a60495c6ef9f8e1f43ad390e2a8f79bf7509a (image=quay.io/ceph/ceph:v19, name=pedantic_driscoll, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:28:34 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 24 09:28:34 compute-0 systemd[1]: Started libpod-conmon-12b50e1d0d014aec3c031134db4a60495c6ef9f8e1f43ad390e2a8f79bf7509a.scope.
Nov 24 09:28:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bd81f0982e5e6fde8435be4ac774af4f383b430aee028e5b83db45ef0d522e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bd81f0982e5e6fde8435be4ac774af4f383b430aee028e5b83db45ef0d522e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:34 compute-0 podman[96414]: 2025-11-24 09:28:34.722920546 +0000 UTC m=+0.098848762 container init 12b50e1d0d014aec3c031134db4a60495c6ef9f8e1f43ad390e2a8f79bf7509a (image=quay.io/ceph/ceph:v19, name=pedantic_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 09:28:34 compute-0 podman[96414]: 2025-11-24 09:28:34.727787362 +0000 UTC m=+0.103715568 container start 12b50e1d0d014aec3c031134db4a60495c6ef9f8e1f43ad390e2a8f79bf7509a (image=quay.io/ceph/ceph:v19, name=pedantic_driscoll, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:28:34 compute-0 podman[96414]: 2025-11-24 09:28:34.730780042 +0000 UTC m=+0.106708238 container attach 12b50e1d0d014aec3c031134db4a60495c6ef9f8e1f43ad390e2a8f79bf7509a (image=quay.io/ceph/ceph:v19, name=pedantic_driscoll, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 09:28:34 compute-0 podman[96414]: 2025-11-24 09:28:34.647064617 +0000 UTC m=+0.022992833 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Nov 24 09:28:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3847201331' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 24 09:28:35 compute-0 pedantic_driscoll[96429]: mimic
Nov 24 09:28:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:28:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:28:35 compute-0 systemd[1]: libpod-12b50e1d0d014aec3c031134db4a60495c6ef9f8e1f43ad390e2a8f79bf7509a.scope: Deactivated successfully.
Nov 24 09:28:35 compute-0 podman[96414]: 2025-11-24 09:28:35.109295241 +0000 UTC m=+0.485223437 container died 12b50e1d0d014aec3c031134db4a60495c6ef9f8e1f43ad390e2a8f79bf7509a (image=quay.io/ceph/ceph:v19, name=pedantic_driscoll, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:28:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:28:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6bd81f0982e5e6fde8435be4ac774af4f383b430aee028e5b83db45ef0d522e-merged.mount: Deactivated successfully.
Nov 24 09:28:35 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.gkqxhl
Nov 24 09:28:35 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.gkqxhl
Nov 24 09:28:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Nov 24 09:28:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 24 09:28:35 compute-0 podman[96414]: 2025-11-24 09:28:35.153928174 +0000 UTC m=+0.529856370 container remove 12b50e1d0d014aec3c031134db4a60495c6ef9f8e1f43ad390e2a8f79bf7509a (image=quay.io/ceph/ceph:v19, name=pedantic_driscoll, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:35 compute-0 ceph-mon[74331]: mds.? [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] up:boot
Nov 24 09:28:35 compute-0 ceph-mon[74331]: mds.? [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] up:active
Nov 24 09:28:35 compute-0 ceph-mon[74331]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 2 up:standby
Nov 24 09:28:35 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vpamdk"}]: dispatch
Nov 24 09:28:35 compute-0 ceph-mon[74331]: 6.a scrub starts
Nov 24 09:28:35 compute-0 ceph-mon[74331]: 6.a scrub ok
Nov 24 09:28:35 compute-0 ceph-mon[74331]: pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 24 09:28:35 compute-0 ceph-mon[74331]: 7.10 scrub starts
Nov 24 09:28:35 compute-0 ceph-mon[74331]: 7.10 scrub ok
Nov 24 09:28:35 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3847201331' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 24 09:28:35 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:35 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 24 09:28:35 compute-0 ceph-mgr[74626]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 24 09:28:35 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 24 09:28:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Nov 24 09:28:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 24 09:28:35 compute-0 systemd[1]: libpod-conmon-12b50e1d0d014aec3c031134db4a60495c6ef9f8e1f43ad390e2a8f79bf7509a.scope: Deactivated successfully.
Nov 24 09:28:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 24 09:28:35 compute-0 sudo[96411]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:28:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:35 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 13 completed events
Nov 24 09:28:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:28:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:36 compute-0 sudo[96506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxijpbmhlbidjwtlyvngqdurbjdxsvcw ; /usr/bin/python3'
Nov 24 09:28:36 compute-0 sudo[96506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:28:36 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:36 compute-0 ceph-mon[74331]: Creating key for client.nfs.cephfs.1.0.compute-2.gkqxhl
Nov 24 09:28:36 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 24 09:28:36 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 24 09:28:36 compute-0 ceph-mon[74331]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 24 09:28:36 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 24 09:28:36 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 24 09:28:36 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:36 compute-0 ceph-mon[74331]: 6.8 scrub starts
Nov 24 09:28:36 compute-0 ceph-mon[74331]: 6.8 scrub ok
Nov 24 09:28:36 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:36 compute-0 python3[96508]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:28:36 compute-0 podman[96509]: 2025-11-24 09:28:36.290659151 +0000 UTC m=+0.044455240 container create af5703d18c8cd191e407627d5c15dea25ffe196a5e92b9d722e7ff8f1cd59709 (image=quay.io/ceph/ceph:v19, name=beautiful_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:36 compute-0 systemd[1]: Started libpod-conmon-af5703d18c8cd191e407627d5c15dea25ffe196a5e92b9d722e7ff8f1cd59709.scope.
Nov 24 09:28:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903334f798e85e9b021f5f9fd549995235b505969075e160da8fd4f5976cbbcb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903334f798e85e9b021f5f9fd549995235b505969075e160da8fd4f5976cbbcb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:36 compute-0 podman[96509]: 2025-11-24 09:28:36.353375661 +0000 UTC m=+0.107171780 container init af5703d18c8cd191e407627d5c15dea25ffe196a5e92b9d722e7ff8f1cd59709 (image=quay.io/ceph/ceph:v19, name=beautiful_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 09:28:36 compute-0 podman[96509]: 2025-11-24 09:28:36.358888561 +0000 UTC m=+0.112684650 container start af5703d18c8cd191e407627d5c15dea25ffe196a5e92b9d722e7ff8f1cd59709 (image=quay.io/ceph/ceph:v19, name=beautiful_johnson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 24 09:28:36 compute-0 podman[96509]: 2025-11-24 09:28:36.362206679 +0000 UTC m=+0.116002788 container attach af5703d18c8cd191e407627d5c15dea25ffe196a5e92b9d722e7ff8f1cd59709 (image=quay.io/ceph/ceph:v19, name=beautiful_johnson, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 09:28:36 compute-0 podman[96509]: 2025-11-24 09:28:36.274024268 +0000 UTC m=+0.027820397 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:28:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 24 09:28:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e9 new map
Nov 24 09:28:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           btime 2025-11-24T09:28:36.500458+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:34.078187+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.vpamdk{-1:24302} state up:standby seq 1 addr [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 09:28:36 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Updating MDS map to version 9 from mon.0
Nov 24 09:28:36 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] up:standby
Nov 24 09:28:36 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 2 up:standby
Nov 24 09:28:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Nov 24 09:28:36 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2906138256' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 24 09:28:36 compute-0 beautiful_johnson[96524]: 
Nov 24 09:28:36 compute-0 beautiful_johnson[96524]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Nov 24 09:28:36 compute-0 systemd[1]: libpod-af5703d18c8cd191e407627d5c15dea25ffe196a5e92b9d722e7ff8f1cd59709.scope: Deactivated successfully.
Nov 24 09:28:36 compute-0 podman[96509]: 2025-11-24 09:28:36.765701188 +0000 UTC m=+0.519497277 container died af5703d18c8cd191e407627d5c15dea25ffe196a5e92b9d722e7ff8f1cd59709 (image=quay.io/ceph/ceph:v19, name=beautiful_johnson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-903334f798e85e9b021f5f9fd549995235b505969075e160da8fd4f5976cbbcb-merged.mount: Deactivated successfully.
Nov 24 09:28:36 compute-0 podman[96509]: 2025-11-24 09:28:36.797436067 +0000 UTC m=+0.551232156 container remove af5703d18c8cd191e407627d5c15dea25ffe196a5e92b9d722e7ff8f1cd59709 (image=quay.io/ceph/ceph:v19, name=beautiful_johnson, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 09:28:36 compute-0 systemd[1]: libpod-conmon-af5703d18c8cd191e407627d5c15dea25ffe196a5e92b9d722e7ff8f1cd59709.scope: Deactivated successfully.
Nov 24 09:28:36 compute-0 sudo[96506]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e10 new map
Nov 24 09:28:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).mds e10 print_map
                                           e10
                                           btime 2025-11-24T09:28:37.509059+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-24T09:28:11.441245+0000
                                           modified        2025-11-24T09:28:34.078187+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24181}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24181 members: 24181
                                           [mds.cephfs.compute-2.bbilht{0:24181} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3576340281,v1:192.168.122.102:6805/3576340281] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cibmfe{-1:14586} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.vpamdk{-1:24302} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] compat {c=[1],r=[1],i=[1fff]}]
Nov 24 09:28:37 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] up:standby
Nov 24 09:28:37 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 2 up:standby
Nov 24 09:28:37 compute-0 ceph-mon[74331]: 6.7 scrub starts
Nov 24 09:28:37 compute-0 ceph-mon[74331]: 6.7 scrub ok
Nov 24 09:28:37 compute-0 ceph-mon[74331]: pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 24 09:28:37 compute-0 ceph-mon[74331]: mds.? [v2:192.168.122.100:6806/3605740467,v1:192.168.122.100:6807/3605740467] up:standby
Nov 24 09:28:37 compute-0 ceph-mon[74331]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 2 up:standby
Nov 24 09:28:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2906138256' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 24 09:28:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Nov 24 09:28:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 24 09:28:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 24 09:28:38 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Nov 24 09:28:38 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Nov 24 09:28:38 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw
Nov 24 09:28:38 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw
Nov 24 09:28:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 24 09:28:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:28:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:28:38 compute-0 ceph-mgr[74626]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.gkqxhl's ganesha conf is defaulting to empty
Nov 24 09:28:38 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.gkqxhl's ganesha conf is defaulting to empty
Nov 24 09:28:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:28:38 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:38 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.gkqxhl on compute-2
Nov 24 09:28:38 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.gkqxhl on compute-2
Nov 24 09:28:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Nov 24 09:28:38 compute-0 ceph-mon[74331]: 6.5 scrub starts
Nov 24 09:28:38 compute-0 ceph-mon[74331]: 6.5 scrub ok
Nov 24 09:28:38 compute-0 ceph-mon[74331]: mds.? [v2:192.168.122.101:6804/2884660857,v1:192.168.122.101:6805/2884660857] up:standby
Nov 24 09:28:38 compute-0 ceph-mon[74331]: fsmap cephfs:1 {0=cephfs.compute-2.bbilht=up:active} 2 up:standby
Nov 24 09:28:38 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 24 09:28:38 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 24 09:28:38 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:28:38 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:28:38 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:39 compute-0 ceph-mon[74331]: 6.2 scrub starts
Nov 24 09:28:39 compute-0 ceph-mon[74331]: Rados config object exists: conf-nfs.cephfs
Nov 24 09:28:39 compute-0 ceph-mon[74331]: Creating key for client.nfs.cephfs.1.0.compute-2.gkqxhl-rgw
Nov 24 09:28:39 compute-0 ceph-mon[74331]: 6.2 scrub ok
Nov 24 09:28:39 compute-0 ceph-mon[74331]: Bind address in nfs.cephfs.1.0.compute-2.gkqxhl's ganesha conf is defaulting to empty
Nov 24 09:28:39 compute-0 ceph-mon[74331]: Deploying daemon nfs.cephfs.1.0.compute-2.gkqxhl on compute-2
Nov 24 09:28:39 compute-0 ceph-mon[74331]: pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Nov 24 09:28:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Nov 24 09:28:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:28:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:28:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:28:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:40 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.ssprex
Nov 24 09:28:40 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.ssprex
Nov 24 09:28:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Nov 24 09:28:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 24 09:28:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 24 09:28:40 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:28:40 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:28:40 compute-0 ceph-mgr[74626]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 24 09:28:40 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 24 09:28:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Nov 24 09:28:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 24 09:28:40 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:28:40 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:28:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 24 09:28:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:28:40 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:40 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:28:40 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:28:40 compute-0 ceph-mon[74331]: 6.3 scrub starts
Nov 24 09:28:40 compute-0 ceph-mon[74331]: 6.3 scrub ok
Nov 24 09:28:40 compute-0 ceph-mon[74331]: 6.d scrub starts
Nov 24 09:28:40 compute-0 ceph-mon[74331]: 6.d scrub ok
Nov 24 09:28:40 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:40 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:40 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:40 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 24 09:28:40 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 24 09:28:40 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 24 09:28:40 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 24 09:28:40 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:41 compute-0 ceph-mon[74331]: pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Nov 24 09:28:41 compute-0 ceph-mon[74331]: Creating key for client.nfs.cephfs.2.0.compute-0.ssprex
Nov 24 09:28:41 compute-0 ceph-mon[74331]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 24 09:28:41 compute-0 ceph-mon[74331]: 6.e scrub starts
Nov 24 09:28:41 compute-0 ceph-mon[74331]: 6.e scrub ok
Nov 24 09:28:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 7 op/s
Nov 24 09:28:42 compute-0 ceph-mon[74331]: 6.19 scrub starts
Nov 24 09:28:42 compute-0 ceph-mon[74331]: 6.19 scrub ok
Nov 24 09:28:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Nov 24 09:28:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 24 09:28:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 24 09:28:43 compute-0 ceph-mon[74331]: pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 7 op/s
Nov 24 09:28:43 compute-0 ceph-mon[74331]: 6.1a scrub starts
Nov 24 09:28:43 compute-0 ceph-mon[74331]: 6.1a scrub ok
Nov 24 09:28:43 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 24 09:28:43 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 24 09:28:43 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Nov 24 09:28:43 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Nov 24 09:28:43 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.ssprex-rgw
Nov 24 09:28:43 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.ssprex-rgw
Nov 24 09:28:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 24 09:28:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:28:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:28:43 compute-0 ceph-mgr[74626]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.ssprex's ganesha conf is defaulting to empty
Nov 24 09:28:43 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.ssprex's ganesha conf is defaulting to empty
Nov 24 09:28:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 09:28:43 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:43 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.ssprex on compute-0
Nov 24 09:28:43 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.ssprex on compute-0
Nov 24 09:28:43 compute-0 sudo[96615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:43 compute-0 sudo[96615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:43 compute-0 sudo[96615]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:43 compute-0 sudo[96640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:43 compute-0 sudo[96640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:44 compute-0 podman[96707]: 2025-11-24 09:28:44.235664682 +0000 UTC m=+0.040126468 container create 77039d8aead11f1bc406de4af6b2d7c912c7695c0f2af79b22b5a562347b0b46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:28:44 compute-0 systemd[1]: Started libpod-conmon-77039d8aead11f1bc406de4af6b2d7c912c7695c0f2af79b22b5a562347b0b46.scope.
Nov 24 09:28:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:44 compute-0 podman[96707]: 2025-11-24 09:28:44.288273499 +0000 UTC m=+0.092735305 container init 77039d8aead11f1bc406de4af6b2d7c912c7695c0f2af79b22b5a562347b0b46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:28:44 compute-0 podman[96707]: 2025-11-24 09:28:44.294712489 +0000 UTC m=+0.099174285 container start 77039d8aead11f1bc406de4af6b2d7c912c7695c0f2af79b22b5a562347b0b46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_colden, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:28:44 compute-0 wonderful_colden[96724]: 167 167
Nov 24 09:28:44 compute-0 podman[96707]: 2025-11-24 09:28:44.298081202 +0000 UTC m=+0.102543008 container attach 77039d8aead11f1bc406de4af6b2d7c912c7695c0f2af79b22b5a562347b0b46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:28:44 compute-0 systemd[1]: libpod-77039d8aead11f1bc406de4af6b2d7c912c7695c0f2af79b22b5a562347b0b46.scope: Deactivated successfully.
Nov 24 09:28:44 compute-0 podman[96707]: 2025-11-24 09:28:44.298808661 +0000 UTC m=+0.103270447 container died 77039d8aead11f1bc406de4af6b2d7c912c7695c0f2af79b22b5a562347b0b46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:28:44 compute-0 podman[96707]: 2025-11-24 09:28:44.220738851 +0000 UTC m=+0.025200667 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-93787ff84c3dbc48ab1ca1f6c2ec9286c895c4f167cfd5b044cf5b753e702ece-merged.mount: Deactivated successfully.
Nov 24 09:28:44 compute-0 podman[96707]: 2025-11-24 09:28:44.332083596 +0000 UTC m=+0.136545382 container remove 77039d8aead11f1bc406de4af6b2d7c912c7695c0f2af79b22b5a562347b0b46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_colden, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 24 09:28:44 compute-0 systemd[1]: libpod-conmon-77039d8aead11f1bc406de4af6b2d7c912c7695c0f2af79b22b5a562347b0b46.scope: Deactivated successfully.
Nov 24 09:28:44 compute-0 systemd[1]: Reloading.
Nov 24 09:28:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.4 KiB/s wr, 3 op/s
Nov 24 09:28:44 compute-0 systemd-rc-local-generator[96765]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:44 compute-0 systemd-sysv-generator[96768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:44 compute-0 systemd[1]: Reloading.
Nov 24 09:28:44 compute-0 ceph-mon[74331]: Rados config object exists: conf-nfs.cephfs
Nov 24 09:28:44 compute-0 ceph-mon[74331]: Creating key for client.nfs.cephfs.2.0.compute-0.ssprex-rgw
Nov 24 09:28:44 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:28:44 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ssprex-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 09:28:44 compute-0 ceph-mon[74331]: Bind address in nfs.cephfs.2.0.compute-0.ssprex's ganesha conf is defaulting to empty
Nov 24 09:28:44 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:28:44 compute-0 ceph-mon[74331]: Deploying daemon nfs.cephfs.2.0.compute-0.ssprex on compute-0
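
[annotation] The mon transactions above are the standard cephadm prep for a ganesha daemon: an RGW-capable cephx identity for the daemon, then a minimal client conf for its container. Replayed by hand they would look roughly like this sketch (entity name and caps are copied from the dispatch lines; get-or-create is idempotent, so rerunning it is safe):

    ceph auth get-or-create client.nfs.cephfs.2.0.compute-0.ssprex-rgw \
        mon 'allow r' \
        osd 'allow rwx tag rgw *=*'
    ceph config generate-minimal-conf
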
Nov 24 09:28:44 compute-0 systemd-rc-local-generator[96805]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:44 compute-0 systemd-sysv-generator[96808]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:44 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:28:45 compute-0 podman[96864]: 2025-11-24 09:28:45.077653535 +0000 UTC m=+0.038065616 container create 3adc7e4dbfb76acd70b92bdc8783d49c26735889ac1576ee9a74ae48f52acf62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cdb7bdfdae5dcbcf1fe0536a4e1ce178bf9372983ea15fc13bc1f0a1a65f89/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cdb7bdfdae5dcbcf1fe0536a4e1ce178bf9372983ea15fc13bc1f0a1a65f89/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cdb7bdfdae5dcbcf1fe0536a4e1ce178bf9372983ea15fc13bc1f0a1a65f89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3cdb7bdfdae5dcbcf1fe0536a4e1ce178bf9372983ea15fc13bc1f0a1a65f89/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ssprex-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:45 compute-0 podman[96864]: 2025-11-24 09:28:45.130792105 +0000 UTC m=+0.091204206 container init 3adc7e4dbfb76acd70b92bdc8783d49c26735889ac1576ee9a74ae48f52acf62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:28:45 compute-0 podman[96864]: 2025-11-24 09:28:45.135554303 +0000 UTC m=+0.095966374 container start 3adc7e4dbfb76acd70b92bdc8783d49c26735889ac1576ee9a74ae48f52acf62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:28:45 compute-0 bash[96864]: 3adc7e4dbfb76acd70b92bdc8783d49c26735889ac1576ee9a74ae48f52acf62
Nov 24 09:28:45 compute-0 podman[96864]: 2025-11-24 09:28:45.060427578 +0000 UTC m=+0.020839679 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:28:45 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
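
[annotation] At this point the new daemon is a podman container supervised by a systemd unit. A quick verification sketch, assuming cephadm's usual ceph-<fsid>@<daemon>.service unit naming (the unit name is inferred, not read from this log; the container name is the one logged at create time):

    systemctl status 'ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service'
    podman ps --filter name=nfs-cephfs-2-0-compute-0-ssprex
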
Nov 24 09:28:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:45 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:28:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:45 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:28:45 compute-0 sudo[96640]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:28:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:45 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:28:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:45 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:28:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:45 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:28:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:45 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:28:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:45 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:28:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:28:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:28:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:45 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:28:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev c180ba8d-7383-4ea2-88c7-36c847a2ea3c (Updating nfs.cephfs deployment (+3 -> 3))
Nov 24 09:28:45 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event c180ba8d-7383-4ea2-88c7-36c847a2ea3c (Updating nfs.cephfs deployment (+3 -> 3)) in 12 seconds
Nov 24 09:28:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:28:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 5ad37125-ccf6-4f0e-b4f4-80754f135960 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Nov 24 09:28:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Nov 24 09:28:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.rsdpvy on compute-1
Nov 24 09:28:45 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.rsdpvy on compute-1
Nov 24 09:28:45 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 14 completed events
Nov 24 09:28:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:28:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-0 ceph-mon[74331]: pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.4 KiB/s wr, 3 op/s
Nov 24 09:28:45 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:45 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.4 KiB/s wr, 3 op/s
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
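
[annotation] The ret=-2 (ENOENT) failures above are expected on a first start: the rados_cluster recovery backend finds no prior client-recovery state, so the grace period has nothing to reclaim and is lifted almost immediately. The recovery records are plain RADOS objects named like the rec-...:nfs.cephfs.2 key above; a sketch for listing them, assuming the .nfs pool and a cephfs namespace (both inferred from the cluster and pool names in this log, not read from the daemon's config):

    rados -p .nfs -N cephfs ls | grep '^rec-'
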
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
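
[annotation] The two Unknown block warnings are commonly benign in this deployment style: a plausible reading is that the RADOS_URLS section is consumed by ganesha's URL loader rather than the main config parser (which then reports it as unknown), and the RGW block only matters once the RGW FSAL is in play. "No export entries found" just means the RADOS-backed export index is still empty at first boot. A sketch for inspecting both, with the object name taken from the "Rados config object exists" line earlier; the conf shape shown in comments is an assumption:

    rados -p .nfs -N cephfs get conf-nfs.cephfs -    # dump the export index object to stdout
    # The generated /etc/ganesha/ganesha.conf typically carries something like:
    #   RADOS_URLS { UserId = "nfs.cephfs.2.0.compute-0.ssprex"; watch_url = "rados://.nfs/cephfs/conf-nfs.cephfs"; }
    #   %url rados://.nfs/cephfs/conf-nfs.cephfs
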
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:28:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
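
[annotation] The DBUS :CRIT lines in this startup are a container artifact, not an NFS failure: ganesha wants the system D-Bus for its admin interface, but no D-Bus socket is mounted into the container, so dbus_bus_get fails and the dbus service thread exits while NFS itself initializes. A confirmation sketch (container name as logged at create time):

    podman exec ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex \
        ls /run/dbus    # expect "No such file or directory", matching dbus_bus_get above
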
Nov 24 09:28:46 compute-0 ceph-mon[74331]: Deploying daemon haproxy.nfs.cephfs.compute-1.rsdpvy on compute-1
Nov 24 09:28:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:47 compute-0 ceph-mon[74331]: pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.4 KiB/s wr, 3 op/s
Nov 24 09:28:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Nov 24 09:28:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:28:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:28:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 24 09:28:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:49 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.jzeayf on compute-0
Nov 24 09:28:49 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.jzeayf on compute-0
Nov 24 09:28:49 compute-0 sudo[96933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:28:49 compute-0 sudo[96933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:28:49 compute-0 sudo[96933]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:49 compute-0 sudo[96958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:28:49 compute-0 sudo[96958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
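
[annotation] Each deploy runs the copy of cephadm staged under /var/lib/ceph/<fsid>/, invoked via sudo as ceph-admin, exactly as the COMMAND= line shows. Once it returns, the orchestrator's view can be checked from any admin host; a sketch:

    ceph orch ls ingress
    ceph orch ps --daemon-type haproxy
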
Nov 24 09:28:49 compute-0 ceph-mon[74331]: pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Nov 24 09:28:49 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:49 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:49 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Nov 24 09:28:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:50 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab20000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:50 compute-0 ceph-mon[74331]: Deploying daemon haproxy.nfs.cephfs.compute-0.jzeayf on compute-0
Nov 24 09:28:51 compute-0 ceph-mon[74331]: pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Nov 24 09:28:51 compute-0 podman[97024]: 2025-11-24 09:28:51.899422847 +0000 UTC m=+2.251218108 container create 2656586acbfe8b7c86aa514069c536b48e3c4df7156616713fd8f8432744cf87 (image=quay.io/ceph/haproxy:2.3, name=suspicious_chebyshev)
Nov 24 09:28:51 compute-0 systemd[1]: Started libpod-conmon-2656586acbfe8b7c86aa514069c536b48e3c4df7156616713fd8f8432744cf87.scope.
Nov 24 09:28:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:28:51 compute-0 podman[97024]: 2025-11-24 09:28:51.882018555 +0000 UTC m=+2.233813836 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 24 09:28:51 compute-0 podman[97024]: 2025-11-24 09:28:51.979013884 +0000 UTC m=+2.330809155 container init 2656586acbfe8b7c86aa514069c536b48e3c4df7156616713fd8f8432744cf87 (image=quay.io/ceph/haproxy:2.3, name=suspicious_chebyshev)
Nov 24 09:28:51 compute-0 podman[97024]: 2025-11-24 09:28:51.985742041 +0000 UTC m=+2.337537302 container start 2656586acbfe8b7c86aa514069c536b48e3c4df7156616713fd8f8432744cf87 (image=quay.io/ceph/haproxy:2.3, name=suspicious_chebyshev)
Nov 24 09:28:51 compute-0 podman[97024]: 2025-11-24 09:28:51.98852699 +0000 UTC m=+2.340322251 container attach 2656586acbfe8b7c86aa514069c536b48e3c4df7156616713fd8f8432744cf87 (image=quay.io/ceph/haproxy:2.3, name=suspicious_chebyshev)
Nov 24 09:28:51 compute-0 suspicious_chebyshev[97142]: 0 0
Nov 24 09:28:51 compute-0 systemd[1]: libpod-2656586acbfe8b7c86aa514069c536b48e3c4df7156616713fd8f8432744cf87.scope: Deactivated successfully.
Nov 24 09:28:51 compute-0 podman[97024]: 2025-11-24 09:28:51.990937619 +0000 UTC m=+2.342732880 container died 2656586acbfe8b7c86aa514069c536b48e3c4df7156616713fd8f8432744cf87 (image=quay.io/ceph/haproxy:2.3, name=suspicious_chebyshev)
Nov 24 09:28:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-14cbf238b9a2cfe3323a7553dbffb5d6b35098c59ee945d07774bd2464ec06d5-merged.mount: Deactivated successfully.
Nov 24 09:28:52 compute-0 podman[97024]: 2025-11-24 09:28:52.02437058 +0000 UTC m=+2.376165831 container remove 2656586acbfe8b7c86aa514069c536b48e3c4df7156616713fd8f8432744cf87 (image=quay.io/ceph/haproxy:2.3, name=suspicious_chebyshev)
Nov 24 09:28:52 compute-0 systemd[1]: libpod-conmon-2656586acbfe8b7c86aa514069c536b48e3c4df7156616713fd8f8432744cf87.scope: Deactivated successfully.
Nov 24 09:28:52 compute-0 systemd[1]: Reloading.
Nov 24 09:28:52 compute-0 systemd-rc-local-generator[97189]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:52 compute-0 systemd-sysv-generator[97192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:52 compute-0 systemd[1]: Reloading.
Nov 24 09:28:52 compute-0 systemd-rc-local-generator[97231]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:28:52 compute-0 systemd-sysv-generator[97234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:28:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Nov 24 09:28:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:52 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:52 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.jzeayf for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:28:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:52 compute-0 podman[97289]: 2025-11-24 09:28:52.84850162 +0000 UTC m=+0.112003443 container create 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:28:52 compute-0 podman[97289]: 2025-11-24 09:28:52.758810702 +0000 UTC m=+0.022312585 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 24 09:28:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8f5a372348a327b2e6b5a0b76ec824e6ea2b8fc83fa08f5ad81abe4f6c0a0f9/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 24 09:28:52 compute-0 podman[97289]: 2025-11-24 09:28:52.896506372 +0000 UTC m=+0.160008215 container init 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:28:52 compute-0 podman[97289]: 2025-11-24 09:28:52.902245255 +0000 UTC m=+0.165747068 container start 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:28:52 compute-0 bash[97289]: 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114
Nov 24 09:28:52 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.jzeayf for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:28:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [NOTICE] 327/092852 (2) : New worker #1 (4) forked
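
[annotation] haproxy starts in master-worker mode (the forked worker above) and immediately begins health-checking its ganesha backends. A plausible reading of the recurring "svc_vc_recv ... (will set dead)" events on the ganesha side is exactly those probes: a bare L4 check opens the backend port without the PROXY-protocol header the listener expects, so ganesha drops the transport each check interval. The rendered config can be read out of the container to confirm (the in-container path is an assumption):

    podman exec ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf \
        cat /var/lib/haproxy/haproxy.cfg
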
Nov 24 09:28:52 compute-0 sudo[96958]: pam_unix(sudo:session): session closed for user root
Nov 24 09:28:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:28:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:28:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 24 09:28:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:52 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.jwgmiu on compute-2
Nov 24 09:28:52 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.jwgmiu on compute-2
Nov 24 09:28:53 compute-0 ceph-mon[74331]: pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Nov 24 09:28:53 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:53 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:53 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:53 compute-0 ceph-mon[74331]: Deploying daemon haproxy.nfs.cephfs.compute-2.jwgmiu on compute-2
Nov 24 09:28:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:54 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:28:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:54 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab20000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:55 compute-0 ceph-mon[74331]: pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:28:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:56 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:28:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:56 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:57 compute-0 ceph-mon[74331]: pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:28:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:28:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:28:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 24 09:28:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Nov 24 09:28:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:57 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:28:57 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:28:57 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:28:57 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:28:57 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 09:28:57 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 09:28:57 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.gcugek on compute-2
Nov 24 09:28:57 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.gcugek on compute-2
Nov 24 09:28:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:28:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:58 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:28:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:58 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:58 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:58 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:58 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:58 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:28:58 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:28:58 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:28:58 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 09:28:58 compute-0 ceph-mon[74331]: Deploying daemon keepalived.nfs.cephfs.compute-2.gcugek on compute-2
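
[annotation] The interface-selection lines show how the ingress VIP gets placed: cephadm searches each host for an interface whose subnet contains the requested virtual_ip and lands on br-ex on all three nodes. The service spec implied by these events can be (re)applied declaratively; in this sketch backend_service, virtual_ip, and placement are read from the log, while the two port numbers are assumptions:

    # ingress-nfs.yaml
    service_type: ingress
    service_id: nfs.cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    spec:
      backend_service: nfs.cephfs
      virtual_ip: 192.168.122.2/24
      frontend_port: 2049
      monitor_port: 9049

    ceph orch apply -i ingress-nfs.yaml
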
Nov 24 09:28:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:28:58 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:28:59 compute-0 ceph-mon[74331]: pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:29:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:00 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:29:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:00 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:00 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:01 compute-0 ceph-mon[74331]: pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:29:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:02 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 938 B/s wr, 4 op/s
Nov 24 09:29:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:02 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:02 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:29:03 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:29:03 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 24 09:29:03 compute-0 ceph-mon[74331]: pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 938 B/s wr, 4 op/s
Nov 24 09:29:03 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:03 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:03 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 09:29:03 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 09:29:03 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:03 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:03 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:03 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:03 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.vrgskq on compute-1
Nov 24 09:29:03 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.vrgskq on compute-1
Nov 24 09:29:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:04 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:29:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:04 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:04 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:04 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:04 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 09:29:04 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:04 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:04 compute-0 ceph-mon[74331]: Deploying daemon keepalived.nfs.cephfs.compute-1.vrgskq on compute-1
Nov 24 09:29:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:04 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/092904 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
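
[annotation] One backend (backend/nfs.cephfs.0, on another node) is refusing connections at this instant, which is unremarkable mid-rollout: that ganesha daemon has simply not finished starting, and two active servers keep the ingress serviceable. A sketch for watching the backends converge:

    ceph orch ps --daemon-type nfs --refresh
    ceph nfs cluster info cephfs
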
Nov 24 09:29:05 compute-0 ceph-mon[74331]: pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:29:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:06 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:29:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:06 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:06 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:07 compute-0 ceph-mon[74331]: pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:29:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:29:07 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:29:07 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 24 09:29:07 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:07 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:07 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:07 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 09:29:07 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 09:29:07 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:07 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:07 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.mglptr on compute-0
Nov 24 09:29:07 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.mglptr on compute-0
Nov 24 09:29:07 compute-0 sudo[97318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:07 compute-0 sudo[97318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:07 compute-0 sudo[97318]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:08 compute-0 sudo[97343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:08 compute-0 sudo[97343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:08 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:29:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:08 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:08 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:08 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:08 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:08 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:08 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:08 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 24 09:29:08 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:08 compute-0 ceph-mon[74331]: Deploying daemon keepalived.nfs.cephfs.compute-0.mglptr on compute-0
Nov 24 09:29:09 compute-0 ceph-mon[74331]: pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:29:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:10 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:29:10
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.mgr', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', '.nfs', 'backups']
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
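
[annotation] This balancer pass is a no-op: in upmap mode it only prepares pg-upmap-items entries when the PG distribution can improve without exceeding the misplaced ceiling, and with all 198 PGs active+clean it prepares 0 of 10 candidate changes. The knobs it logged can be read back directly; a sketch:

    ceph balancer status
    ceph config get mgr target_max_misplaced_ratio    # the "max misplaced 0.050000" above
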
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:29:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:10 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
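
The autoscaler lines above are internally consistent: each logged "pg target" equals the pool's usage fraction times its bias times a PG budget of 300, which matches 3 OSDs at the default mon_target_pg_per_osd of 100 (the budget of 300 is inferred from the numbers, not stated in the log). The target is then quantized to a power-of-two pg_num, which is what triggers the "osd pool set ... pg_num" commands that follow. Reproducing the logged targets:

    # usage fraction and bias exactly as logged per pool
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (3.8154424692322717e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
        ".nfs":               (6.359070782053786e-08, 1.0),
    }
    PG_BUDGET = 300  # assumed: 3 OSDs * mon_target_pg_per_osd (default 100)

    for name, (used, bias) in pools.items():
        # prints the same "pg target" value as each log line above
        print(f"{name}: {used * bias * PG_BUDGET}")
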
Nov 24 09:29:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Nov 24 09:29:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:29:10 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:29:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:10 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 24 09:29:10 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:11 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 24 09:29:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 24 09:29:11 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 6a4e7948-04a8-43b1-a34f-d4bf0f3c7a00 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 24 09:29:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Nov 24 09:29:11 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:11 compute-0 podman[97408]: 2025-11-24 09:29:11.035569353 +0000 UTC m=+2.667530359 container create b9ba53078652799f0d3ffafd89167f525cec724616a2dbccbcef5926ba547869 (image=quay.io/ceph/keepalived:2.2.4, name=serene_sinoussi, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, com.redhat.component=keepalived-container, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, architecture=x86_64, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, version=2.2.4, io.openshift.tags=Ceph keepalived)
Nov 24 09:29:11 compute-0 systemd[1]: Started libpod-conmon-b9ba53078652799f0d3ffafd89167f525cec724616a2dbccbcef5926ba547869.scope.
Nov 24 09:29:11 compute-0 podman[97408]: 2025-11-24 09:29:11.020639032 +0000 UTC m=+2.652600188 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 24 09:29:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:11 compute-0 podman[97408]: 2025-11-24 09:29:11.125746543 +0000 UTC m=+2.757707569 container init b9ba53078652799f0d3ffafd89167f525cec724616a2dbccbcef5926ba547869 (image=quay.io/ceph/keepalived:2.2.4, name=serene_sinoussi, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, description=keepalived for Ceph, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4)
Nov 24 09:29:11 compute-0 podman[97408]: 2025-11-24 09:29:11.135935396 +0000 UTC m=+2.767896392 container start b9ba53078652799f0d3ffafd89167f525cec724616a2dbccbcef5926ba547869 (image=quay.io/ceph/keepalived:2.2.4, name=serene_sinoussi, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., version=2.2.4, build-date=2023-02-22T09:23:20, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1793, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, com.redhat.component=keepalived-container)
Nov 24 09:29:11 compute-0 podman[97408]: 2025-11-24 09:29:11.138721605 +0000 UTC m=+2.770682631 container attach b9ba53078652799f0d3ffafd89167f525cec724616a2dbccbcef5926ba547869 (image=quay.io/ceph/keepalived:2.2.4, name=serene_sinoussi, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, distribution-scope=public, build-date=2023-02-22T09:23:20, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git)
Nov 24 09:29:11 compute-0 systemd[1]: libpod-b9ba53078652799f0d3ffafd89167f525cec724616a2dbccbcef5926ba547869.scope: Deactivated successfully.
Nov 24 09:29:11 compute-0 serene_sinoussi[97503]: 0 0
Nov 24 09:29:11 compute-0 podman[97408]: 2025-11-24 09:29:11.146306733 +0000 UTC m=+2.778267739 container died b9ba53078652799f0d3ffafd89167f525cec724616a2dbccbcef5926ba547869 (image=quay.io/ceph/keepalived:2.2.4, name=serene_sinoussi, version=2.2.4, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public)
Nov 24 09:29:11 compute-0 conmon[97503]: conmon b9ba53078652799f0d3f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9ba53078652799f0d3ffafd89167f525cec724616a2dbccbcef5926ba547869.scope/container/memory.events
Nov 24 09:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e28c6c2083d4f10f42abe3e48238c6ac2240ca7d7b523b3e4401d3260a97618-merged.mount: Deactivated successfully.
Nov 24 09:29:11 compute-0 podman[97408]: 2025-11-24 09:29:11.189009444 +0000 UTC m=+2.820970450 container remove b9ba53078652799f0d3ffafd89167f525cec724616a2dbccbcef5926ba547869 (image=quay.io/ceph/keepalived:2.2.4, name=serene_sinoussi, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, version=2.2.4)
Nov 24 09:29:11 compute-0 systemd[1]: libpod-conmon-b9ba53078652799f0d3ffafd89167f525cec724616a2dbccbcef5926ba547869.scope: Deactivated successfully.
Nov 24 09:29:11 compute-0 systemd[1]: Reloading.
Nov 24 09:29:11 compute-0 systemd-rc-local-generator[97553]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:11 compute-0 systemd-sysv-generator[97557]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:11 compute-0 systemd[1]: Reloading.
Nov 24 09:29:11 compute-0 systemd-sysv-generator[97598]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:11 compute-0 systemd-rc-local-generator[97594]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:11 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.mglptr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:29:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 24 09:29:12 compute-0 ceph-mon[74331]: pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:29:12 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:12 compute-0 ceph-mon[74331]: osdmap e52: 3 total, 3 up, 3 in
Nov 24 09:29:12 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:12 compute-0 podman[97652]: 2025-11-24 09:29:12.025349837 +0000 UTC m=+0.046210328 container create da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-type=git, io.openshift.expose-services=, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793)
Nov 24 09:29:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 24 09:29:12 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 24 09:29:12 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 5fb23d80-0b5c-4fb0-955a-9917d7e81f9d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 24 09:29:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Nov 24 09:29:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f44e9107c874c2fac4b11c2910ddcfdfb4aed8b82b247288baa1d15b178366/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:12 compute-0 podman[97652]: 2025-11-24 09:29:12.076719664 +0000 UTC m=+0.097580155 container init da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, vendor=Red Hat, Inc., name=keepalived, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 24 09:29:12 compute-0 podman[97652]: 2025-11-24 09:29:12.080813965 +0000 UTC m=+0.101674426 container start da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, name=keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793)
Nov 24 09:29:12 compute-0 bash[97652]: da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021
Nov 24 09:29:12 compute-0 podman[97652]: 2025-11-24 09:29:12.004716026 +0000 UTC m=+0.025576517 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 24 09:29:12 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.mglptr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:29:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:12 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 24 09:29:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:12 2025: Running on Linux 5.14.0-639.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025 (built for Linux 5.14.0)
Nov 24 09:29:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:12 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 24 09:29:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:12 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 24 09:29:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:12 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 24 09:29:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:12 2025: Starting VRRP child process, pid=4
Nov 24 09:29:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:12 2025: Startup complete
Nov 24 09:29:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:12 2025: (VI_0) Entering BACKUP STATE (init)
Nov 24 09:29:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:12 2025: VRRP_Script(check_backend) succeeded
Nov 24 09:29:12 compute-0 sudo[97343]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 24 09:29:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:12 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 5ad37125-ccf6-4f0e-b4f4-80754f135960 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Nov 24 09:29:12 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 5ad37125-ccf6-4f0e-b4f4-80754f135960 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 27 seconds
Nov 24 09:29:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 24 09:29:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:12 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 9a6552b4-32fc-4fcc-9154-b9a8097e1d2b (Updating alertmanager deployment (+1 -> 1))
Nov 24 09:29:12 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Nov 24 09:29:12 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Nov 24 09:29:12 compute-0 sudo[97674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:12 compute-0 sudo[97674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:12 compute-0 sudo[97674]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:12 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:12 compute-0 sudo[97699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:12 compute-0 sudo[97699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
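
The sudo COMMAND above shows the deployment mechanism: the mgr's cephadm module copies its own cephadm script into /var/lib/ceph/<fsid>/ under a name with a 64-hex suffix (presumably the script's SHA-256 digest; that reading is an assumption, not confirmed by the log) and runs it as root with "_orch deploy" for each daemon. A quick on-host check of the digest interpretation:

    import hashlib

    # path copied verbatim from the sudo COMMAND line above
    path = ("/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/"
            "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    # True if the filename suffix really is the script's SHA-256
    print(digest == path.rsplit(".", 1)[1])
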
Nov 24 09:29:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 127 B/s wr, 1 op/s
Nov 24 09:29:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Nov 24 09:29:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Nov 24 09:29:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
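
Note the two-step resize in these audit entries: the autoscaler first raised pg_num (the target PG count) for each pool, and now sets pg_num_actual, which the mgr uses to step the real PG count toward that target. Both share the same mon command shape; rendered back into CLI form, the first is roughly "ceph osd pool set .rgw.root pg_num 32":

    # payload copied from the audit log line, rendered as its CLI equivalent
    cmd = {"prefix": "osd pool set", "pool": ".rgw.root",
           "var": "pg_num", "val": "32"}
    print(f'ceph {cmd["prefix"]} {cmd["pool"]} {cmd["var"]} {cmd["val"]}')
    # -> ceph osd pool set .rgw.root pg_num 32
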
Nov 24 09:29:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:12 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00003250 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:12 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 24 09:29:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:13 compute-0 ceph-mon[74331]: osdmap e53: 3 total, 3 up, 3 in
Nov 24 09:29:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:13 compute-0 ceph-mon[74331]: Deploying daemon alertmanager.compute-0 on compute-0
Nov 24 09:29:13 compute-0 ceph-mon[74331]: pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 127 B/s wr, 1 op/s
Nov 24 09:29:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:13 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 24 09:29:13 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 24 09:29:13 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 628e8e7c-7958-4111-9b6d-8399b272ce91 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 24 09:29:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Nov 24 09:29:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:13 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 54 pg[8.0( v 38'12 (0'0,38'12] local-lis/les=37/38 n=6 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=54 pruub=15.110892296s) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 38'11 mlcod 38'11 active pruub 174.135833740s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:13 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 54 pg[9.0( v 45'1130 (0'0,45'1130] local-lis/les=39/40 n=178 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=54 pruub=9.133955002s) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 45'1129 mlcod 45'1129 active pruub 168.159286499s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:13 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 54 pg[8.0( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=54 pruub=15.110892296s) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 38'11 mlcod 0'0 unknown pruub 174.135833740s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:13 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 54 pg[9.0( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=54 pruub=9.133955002s) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 45'1129 mlcod 0'0 unknown pruub 168.159286499s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e6d2e8 space 0x558d21d96010 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e983e8 space 0x558d21d65940 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e687a8 space 0x558d21ec8b70 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e76b68 space 0x558d21d96900 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e80ca8 space 0x558d21ec8de0 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e71428 space 0x558d21d960e0 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e777e8 space 0x558d21d96eb0 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e63a68 space 0x558d21ec8690 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e76668 space 0x558d21d969d0 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21de0348 space 0x558d21d65bb0 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e6c528 space 0x558d21d65050 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21eca028 space 0x558d21d96c40 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21c29d88 space 0x558d21d65a10 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e6c028 space 0x558d21ec8420 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e76708 space 0x558d21d96280 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e69b08 space 0x558d21c1a010 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21de1b08 space 0x558d21d96f80 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21c04668 space 0x558d21d65ae0 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21c1d248 space 0x558d21ec8eb0 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e98348 space 0x558d21d961b0 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e69388 space 0x558d21d96760 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e77568 space 0x558d21d96aa0 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e71068 space 0x558d21c1bef0 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e70028 space 0x558d21defc80 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e76348 space 0x558d21d96d10 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21eca528 space 0x558d21d96b70 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e68a28 space 0x558d21ec85c0 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e779c8 space 0x558d21d96de0 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21e697e8 space 0x558d21d96690 0x0~1000 clean)
Nov 24 09:29:13 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x558d2147ed80) operator()   moving buffer(0x558d21ecb2e8 space 0x558d21d65c80 0x0~1000 clean)
Nov 24 09:29:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 24 09:29:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:14 compute-0 ceph-mon[74331]: osdmap e54: 3 total, 3 up, 3 in
Nov 24 09:29:14 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 24 09:29:14 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 24 09:29:14 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev a6e243c1-72b2-42c6-9e86-65da8cd0af65 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 24 09:29:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Nov 24 09:29:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.14( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.15( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.14( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.15( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.17( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.16( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.16( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.17( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.11( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.10( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.10( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:14 : epoch 6924254d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.11( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.2( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=1 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.3( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.2( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.3( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=1 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.f( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.e( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.9( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.8( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.8( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.9( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.b( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.a( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.f( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.e( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.c( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.d( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.c( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.d( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.a( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.b( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1( v 38'12 (0'0,38'12] local-lis/les=37/38 n=1 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.6( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.7( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.7( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.6( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=1 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.5( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=1 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.4( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.5( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1a( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1b( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.4( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=1 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1a( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1b( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.19( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.18( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.19( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.18( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1f( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1f( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1e( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1c( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1d( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1d( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1c( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.13( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.12( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1e( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.13( v 45'1130 lc 0'0 (0'0,45'1130] local-lis/les=39/40 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.12( v 38'12 lc 0'0 (0'0,38'12] local-lis/les=37/38 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.14( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.17( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.14( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.16( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.15( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.17( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.10( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.11( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.2( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.3( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.2( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.9( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.3( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.8( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.9( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.8( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.a( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.b( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.d( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.c( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.e( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.c( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.b( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.0( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 45'1129 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.0( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 38'11 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.7( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.7( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.f( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.6( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.4( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.5( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.5( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1b( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1b( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1a( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.4( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.19( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.18( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.18( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.19( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1f( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1e( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1d( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.1c( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1c( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.13( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.13( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 55 pg[8.12( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=37/37 les/c/f=38/38/0 sis=54) [0] r=0 lpr=54 pi=[37,54)/1 crt=38'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:14 compute-0 podman[97764]: 2025-11-24 09:29:14.163372342 +0000 UTC m=+1.513497104 volume create 99deafe4734b4860bb1a7448c143027503770db0abb24582ae8a476112b56c91
Nov 24 09:29:14 compute-0 podman[97764]: 2025-11-24 09:29:14.149333744 +0000 UTC m=+1.499458525 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 24 09:29:14 compute-0 podman[97764]: 2025-11-24 09:29:14.174028958 +0000 UTC m=+1.524153719 container create 3a7631ccec4c31b03ce62a210c93f82b672fa95496a781cd564068a566fe35bf (image=quay.io/prometheus/alertmanager:v0.25.0, name=recursing_kirch, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:14 compute-0 systemd[1]: Started libpod-conmon-3a7631ccec4c31b03ce62a210c93f82b672fa95496a781cd564068a566fe35bf.scope.
Nov 24 09:29:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f54ba078689f8c33c86633aa1fc78eaf4e414e8b74e02136d5b9878638e5757/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:14 compute-0 podman[97764]: 2025-11-24 09:29:14.251200904 +0000 UTC m=+1.601325695 container init 3a7631ccec4c31b03ce62a210c93f82b672fa95496a781cd564068a566fe35bf (image=quay.io/prometheus/alertmanager:v0.25.0, name=recursing_kirch, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:14 compute-0 podman[97764]: 2025-11-24 09:29:14.259207823 +0000 UTC m=+1.609332584 container start 3a7631ccec4c31b03ce62a210c93f82b672fa95496a781cd564068a566fe35bf (image=quay.io/prometheus/alertmanager:v0.25.0, name=recursing_kirch, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:14 compute-0 podman[97764]: 2025-11-24 09:29:14.262499545 +0000 UTC m=+1.612624376 container attach 3a7631ccec4c31b03ce62a210c93f82b672fa95496a781cd564068a566fe35bf (image=quay.io/prometheus/alertmanager:v0.25.0, name=recursing_kirch, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:14 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:14 compute-0 recursing_kirch[97903]: 65534 65534
Nov 24 09:29:14 compute-0 systemd[1]: libpod-3a7631ccec4c31b03ce62a210c93f82b672fa95496a781cd564068a566fe35bf.scope: Deactivated successfully.
Nov 24 09:29:14 compute-0 conmon[97903]: conmon 3a7631ccec4c31b03ce6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a7631ccec4c31b03ce62a210c93f82b672fa95496a781cd564068a566fe35bf.scope/container/memory.events
Nov 24 09:29:14 compute-0 podman[97764]: 2025-11-24 09:29:14.266019633 +0000 UTC m=+1.616144414 container died 3a7631ccec4c31b03ce62a210c93f82b672fa95496a781cd564068a566fe35bf (image=quay.io/prometheus/alertmanager:v0.25.0, name=recursing_kirch, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f54ba078689f8c33c86633aa1fc78eaf4e414e8b74e02136d5b9878638e5757-merged.mount: Deactivated successfully.
Nov 24 09:29:14 compute-0 podman[97764]: 2025-11-24 09:29:14.339431386 +0000 UTC m=+1.689556147 container remove 3a7631ccec4c31b03ce62a210c93f82b672fa95496a781cd564068a566fe35bf (image=quay.io/prometheus/alertmanager:v0.25.0, name=recursing_kirch, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:14 compute-0 podman[97764]: 2025-11-24 09:29:14.355966087 +0000 UTC m=+1.706090868 volume remove 99deafe4734b4860bb1a7448c143027503770db0abb24582ae8a476112b56c91
Nov 24 09:29:14 compute-0 systemd[1]: libpod-conmon-3a7631ccec4c31b03ce62a210c93f82b672fa95496a781cd564068a566fe35bf.scope: Deactivated successfully.
Nov 24 09:29:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v43: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
Nov 24 09:29:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Nov 24 09:29:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Nov 24 09:29:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:14 compute-0 podman[97921]: 2025-11-24 09:29:14.464092852 +0000 UTC m=+0.064392380 volume create bf836f83d6bdb20fc9bb37462c222917de460c0adf3419362de0afccd7efc33b
Nov 24 09:29:14 compute-0 podman[97921]: 2025-11-24 09:29:14.477065594 +0000 UTC m=+0.077365132 container create 5c7e81e6410c804a1cd05a522b99666bfc69993be6fbba3a0178538c83d807a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=practical_ptolemy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:14 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:14 compute-0 systemd[1]: Started libpod-conmon-5c7e81e6410c804a1cd05a522b99666bfc69993be6fbba3a0178538c83d807a9.scope.
Nov 24 09:29:14 compute-0 podman[97921]: 2025-11-24 09:29:14.440864415 +0000 UTC m=+0.041163983 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 24 09:29:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1b1f7bffbfc5de3149b9c684b6c4eeae16a8c7df961d3081c71b3f7bbf3f467/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:14 compute-0 podman[97921]: 2025-11-24 09:29:14.559688106 +0000 UTC m=+0.159987644 container init 5c7e81e6410c804a1cd05a522b99666bfc69993be6fbba3a0178538c83d807a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=practical_ptolemy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:14 compute-0 podman[97921]: 2025-11-24 09:29:14.567562182 +0000 UTC m=+0.167861710 container start 5c7e81e6410c804a1cd05a522b99666bfc69993be6fbba3a0178538c83d807a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=practical_ptolemy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:14 compute-0 practical_ptolemy[97937]: 65534 65534
Nov 24 09:29:14 compute-0 systemd[1]: libpod-5c7e81e6410c804a1cd05a522b99666bfc69993be6fbba3a0178538c83d807a9.scope: Deactivated successfully.
Nov 24 09:29:14 compute-0 podman[97921]: 2025-11-24 09:29:14.571227744 +0000 UTC m=+0.171527272 container attach 5c7e81e6410c804a1cd05a522b99666bfc69993be6fbba3a0178538c83d807a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=practical_ptolemy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:14 compute-0 podman[97921]: 2025-11-24 09:29:14.57147854 +0000 UTC m=+0.171778058 container died 5c7e81e6410c804a1cd05a522b99666bfc69993be6fbba3a0178538c83d807a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=practical_ptolemy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1b1f7bffbfc5de3149b9c684b6c4eeae16a8c7df961d3081c71b3f7bbf3f467-merged.mount: Deactivated successfully.
Nov 24 09:29:14 compute-0 podman[97921]: 2025-11-24 09:29:14.609055153 +0000 UTC m=+0.209354671 container remove 5c7e81e6410c804a1cd05a522b99666bfc69993be6fbba3a0178538c83d807a9 (image=quay.io/prometheus/alertmanager:v0.25.0, name=practical_ptolemy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:14 compute-0 podman[97921]: 2025-11-24 09:29:14.611715049 +0000 UTC m=+0.212014567 volume remove bf836f83d6bdb20fc9bb37462c222917de460c0adf3419362de0afccd7efc33b
Nov 24 09:29:14 compute-0 systemd[1]: libpod-conmon-5c7e81e6410c804a1cd05a522b99666bfc69993be6fbba3a0178538c83d807a9.scope: Deactivated successfully.
Nov 24 09:29:14 compute-0 systemd[1]: Reloading.
Nov 24 09:29:14 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 24 09:29:14 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 24 09:29:14 compute-0 systemd-rc-local-generator[97979]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:14 compute-0 systemd-sysv-generator[97984]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:14 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00003b70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:14 compute-0 systemd[1]: Reloading.
Nov 24 09:29:14 compute-0 systemd-rc-local-generator[98020]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:15 compute-0 systemd-sysv-generator[98024]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 24 09:29:15 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:15 compute-0 ceph-mon[74331]: osdmap e55: 3 total, 3 up, 3 in
Nov 24 09:29:15 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 09:29:15 compute-0 ceph-mon[74331]: pgmap v43: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
Nov 24 09:29:15 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:15 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 9d81a659-3b1f-458d-a3a4-75886645fc36 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 6a4e7948-04a8-43b1-a34f-d4bf0f3c7a00 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 6a4e7948-04a8-43b1-a34f-d4bf0f3c7a00 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 5fb23d80-0b5c-4fb0-955a-9917d7e81f9d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 5fb23d80-0b5c-4fb0-955a-9917d7e81f9d (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 628e8e7c-7958-4111-9b6d-8399b272ce91 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 628e8e7c-7958-4111-9b6d-8399b272ce91 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev a6e243c1-72b2-42c6-9e86-65da8cd0af65 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event a6e243c1-72b2-42c6-9e86-65da8cd0af65 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 9d81a659-3b1f-458d-a3a4-75886645fc36 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 9d81a659-3b1f-458d-a3a4-75886645fc36 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Nov 24 09:29:15 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:29:15 compute-0 podman[98080]: 2025-11-24 09:29:15.415928464 +0000 UTC m=+0.039141963 volume create aefd7ce6b9dfb4441a7c905b4ba016f6fff7115a1321898dd2b88ee2cc7ec854
Nov 24 09:29:15 compute-0 podman[98080]: 2025-11-24 09:29:15.426687372 +0000 UTC m=+0.049900871 container create 32681d7ec5cc8674cee7672941d75d1674b5a61184918a28db89f06c57c7c5f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:15 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 56 pg[11.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=10.804900169s) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active pruub 172.204727173s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:15 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 56 pg[11.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=10.804900169s) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown pruub 172.204727173s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9128bf420034ef6f01c3d1c6331bf79eff09ec3ed84cf7539f0b02de21039bd9/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9128bf420034ef6f01c3d1c6331bf79eff09ec3ed84cf7539f0b02de21039bd9/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:15 compute-0 podman[98080]: 2025-11-24 09:29:15.490407744 +0000 UTC m=+0.113621263 container init 32681d7ec5cc8674cee7672941d75d1674b5a61184918a28db89f06c57c7c5f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:15 compute-0 podman[98080]: 2025-11-24 09:29:15.401693131 +0000 UTC m=+0.024906650 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 24 09:29:15 compute-0 podman[98080]: 2025-11-24 09:29:15.497517341 +0000 UTC m=+0.120730840 container start 32681d7ec5cc8674cee7672941d75d1674b5a61184918a28db89f06c57c7c5f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:15 compute-0 bash[98080]: 32681d7ec5cc8674cee7672941d75d1674b5a61184918a28db89f06c57c7c5f8
Nov 24 09:29:15 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:29:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[98094]: ts=2025-11-24T09:29:15.524Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Nov 24 09:29:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[98094]: ts=2025-11-24T09:29:15.524Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 20 completed events
Nov 24 09:29:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:29:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[98094]: ts=2025-11-24T09:29:15.533Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Nov 24 09:29:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[98094]: ts=2025-11-24T09:29:15.535Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Nov 24 09:29:15 compute-0 sudo[97699]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[98094]: ts=2025-11-24T09:29:15.577Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 24 09:29:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[98094]: ts=2025-11-24T09:29:15.578Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 24 09:29:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Nov 24 09:29:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[98094]: ts=2025-11-24T09:29:15.582Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Nov 24 09:29:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[98094]: ts=2025-11-24T09:29:15.582Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 9a6552b4-32fc-4fcc-9154-b9a8097e1d2b (Updating alertmanager deployment (+1 -> 1))
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 9a6552b4-32fc-4fcc-9154-b9a8097e1d2b (Updating alertmanager deployment (+1 -> 1)) in 3 seconds
Nov 24 09:29:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev de8470b8-72e9-4ae4-b13f-249be9f0e534 (Updating grafana deployment (+1 -> 1))
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Nov 24 09:29:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 24 09:29:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Nov 24 09:29:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Nov 24 09:29:15 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Nov 24 09:29:15 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 24 09:29:15 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 24 09:29:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:15 2025: (VI_0) Entering MASTER STATE
Nov 24 09:29:15 compute-0 sudo[98114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:15 compute-0 sudo[98114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:15 compute-0 sudo[98114]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:15 compute-0 sudo[98139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:15 compute-0 sudo[98139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 24 09:29:16 compute-0 ceph-mon[74331]: 9.14 scrub starts
Nov 24 09:29:16 compute-0 ceph-mon[74331]: 9.14 scrub ok
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:16 compute-0 ceph-mon[74331]: osdmap e56: 3 total, 3 up, 3 in
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-0 ceph-mon[74331]: Regenerating cephadm self-signed grafana TLS certificates
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 24 09:29:16 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:16 compute-0 ceph-mon[74331]: Deploying daemon grafana.compute-0 on compute-0
Nov 24 09:29:16 compute-0 ceph-mon[74331]: 9.17 scrub starts
Nov 24 09:29:16 compute-0 ceph-mon[74331]: 9.17 scrub ok
Nov 24 09:29:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 24 09:29:16 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.15( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.16( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.13( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.12( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.c( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.b( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.a( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.9( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.d( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.8( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.2( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.3( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.5( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.7( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.18( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1a( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1b( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1c( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1d( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1e( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1f( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.11( empty local-lis/les=43/44 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.15( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.0( empty local-lis/les=56/57 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.c( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.9( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.d( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.2( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.18( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.b( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.1f( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 57 pg[11.11( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:16 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v46: 322 pgs: 124 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Nov 24 09:29:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:16 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:16 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 24 09:29:16 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 24 09:29:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:16 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:17 : epoch 6924254d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:29:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:17 : epoch 6924254d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:29:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 24 09:29:17 compute-0 ceph-mon[74331]: osdmap e57: 3 total, 3 up, 3 in
Nov 24 09:29:17 compute-0 ceph-mon[74331]: pgmap v46: 322 pgs: 124 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:17 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:17 compute-0 ceph-mon[74331]: 10.1b scrub starts
Nov 24 09:29:17 compute-0 ceph-mon[74331]: 10.1b scrub ok
Nov 24 09:29:17 compute-0 ceph-mon[74331]: 8.16 scrub starts
Nov 24 09:29:17 compute-0 ceph-mon[74331]: 8.16 scrub ok
Nov 24 09:29:17 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 24 09:29:17 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 24 09:29:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[98094]: ts=2025-11-24T09:29:17.536Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000837168s
Nov 24 09:29:17 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 24 09:29:17 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 24 09:29:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 24 09:29:18 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 09:29:18 compute-0 ceph-mon[74331]: osdmap e58: 3 total, 3 up, 3 in
Nov 24 09:29:18 compute-0 ceph-mon[74331]: 10.1e scrub starts
Nov 24 09:29:18 compute-0 ceph-mon[74331]: 10.1e scrub ok
Nov 24 09:29:18 compute-0 ceph-mon[74331]: 8.14 scrub starts
Nov 24 09:29:18 compute-0 ceph-mon[74331]: 8.14 scrub ok
Nov 24 09:29:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 24 09:29:18 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 24 09:29:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:18 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00003b70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:18 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:18 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 24 09:29:18 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 24 09:29:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:18 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:19 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 24 09:29:19 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 24 09:29:19 compute-0 ceph-mon[74331]: osdmap e59: 3 total, 3 up, 3 in
Nov 24 09:29:19 compute-0 ceph-mon[74331]: pgmap v49: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:19 compute-0 ceph-mon[74331]: 10.1c scrub starts
Nov 24 09:29:19 compute-0 ceph-mon[74331]: 10.1c scrub ok
Nov 24 09:29:19 compute-0 ceph-mon[74331]: 9.11 scrub starts
Nov 24 09:29:19 compute-0 ceph-mon[74331]: 9.11 scrub ok
Nov 24 09:29:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:20 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:20 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00003b70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:20 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 21 completed events
Nov 24 09:29:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:29:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:20 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 24 09:29:20 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 24 09:29:20 compute-0 ceph-mon[74331]: 10.10 deep-scrub starts
Nov 24 09:29:20 compute-0 ceph-mon[74331]: 10.10 deep-scrub ok
Nov 24 09:29:20 compute-0 ceph-mon[74331]: 8.15 scrub starts
Nov 24 09:29:20 compute-0 ceph-mon[74331]: 8.15 scrub ok
Nov 24 09:29:20 compute-0 ceph-mon[74331]: 10.12 scrub starts
Nov 24 09:29:20 compute-0 ceph-mon[74331]: 10.12 scrub ok
Nov 24 09:29:20 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:20 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:21 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 24 09:29:21 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 24 09:29:21 compute-0 ceph-mon[74331]: pgmap v50: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:21 compute-0 ceph-mon[74331]: 8.10 scrub starts
Nov 24 09:29:21 compute-0 ceph-mon[74331]: 8.10 scrub ok
Nov 24 09:29:21 compute-0 ceph-mon[74331]: 10.1d scrub starts
Nov 24 09:29:21 compute-0 ceph-mon[74331]: 10.1d scrub ok
Nov 24 09:29:21 compute-0 podman[98205]: 2025-11-24 09:29:21.952766559 +0000 UTC m=+5.711543387 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 24 09:29:21 compute-0 podman[98205]: 2025-11-24 09:29:21.976337044 +0000 UTC m=+5.735113842 container create 6c4c31965d55e0c18e8b8bef1ad3cb3e669b6cc769accd496983eb228bf3ed6a (image=quay.io/ceph/grafana:10.4.0, name=interesting_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:22 compute-0 systemd[1]: Started libpod-conmon-6c4c31965d55e0c18e8b8bef1ad3cb3e669b6cc769accd496983eb228bf3ed6a.scope.
Nov 24 09:29:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:22 compute-0 podman[98205]: 2025-11-24 09:29:22.045778869 +0000 UTC m=+5.804555657 container init 6c4c31965d55e0c18e8b8bef1ad3cb3e669b6cc769accd496983eb228bf3ed6a (image=quay.io/ceph/grafana:10.4.0, name=interesting_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:22 compute-0 podman[98205]: 2025-11-24 09:29:22.052481785 +0000 UTC m=+5.811258573 container start 6c4c31965d55e0c18e8b8bef1ad3cb3e669b6cc769accd496983eb228bf3ed6a (image=quay.io/ceph/grafana:10.4.0, name=interesting_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:22 compute-0 podman[98205]: 2025-11-24 09:29:22.055190573 +0000 UTC m=+5.813967381 container attach 6c4c31965d55e0c18e8b8bef1ad3cb3e669b6cc769accd496983eb228bf3ed6a (image=quay.io/ceph/grafana:10.4.0, name=interesting_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:22 compute-0 interesting_hodgkin[98418]: 472 0
Nov 24 09:29:22 compute-0 systemd[1]: libpod-6c4c31965d55e0c18e8b8bef1ad3cb3e669b6cc769accd496983eb228bf3ed6a.scope: Deactivated successfully.
Nov 24 09:29:22 compute-0 podman[98205]: 2025-11-24 09:29:22.057169612 +0000 UTC m=+5.815946400 container died 6c4c31965d55e0c18e8b8bef1ad3cb3e669b6cc769accd496983eb228bf3ed6a (image=quay.io/ceph/grafana:10.4.0, name=interesting_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-371ba44d1ddd516f68f772d1497a233f388bc8b2413abaebef6725914419b9d8-merged.mount: Deactivated successfully.
Nov 24 09:29:22 compute-0 podman[98205]: 2025-11-24 09:29:22.104025626 +0000 UTC m=+5.862802424 container remove 6c4c31965d55e0c18e8b8bef1ad3cb3e669b6cc769accd496983eb228bf3ed6a (image=quay.io/ceph/grafana:10.4.0, name=interesting_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:22 compute-0 systemd[1]: libpod-conmon-6c4c31965d55e0c18e8b8bef1ad3cb3e669b6cc769accd496983eb228bf3ed6a.scope: Deactivated successfully.
Nov 24 09:29:22 compute-0 podman[98438]: 2025-11-24 09:29:22.174584428 +0000 UTC m=+0.051806828 container create 78544ff24afe8bc363e16bf63208a67a9cea60865a637213fb20e07c3dd25d94 (image=quay.io/ceph/grafana:10.4.0, name=hungry_solomon, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:22 compute-0 systemd[1]: Started libpod-conmon-78544ff24afe8bc363e16bf63208a67a9cea60865a637213fb20e07c3dd25d94.scope.
Nov 24 09:29:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:22 compute-0 podman[98438]: 2025-11-24 09:29:22.235128182 +0000 UTC m=+0.112350592 container init 78544ff24afe8bc363e16bf63208a67a9cea60865a637213fb20e07c3dd25d94 (image=quay.io/ceph/grafana:10.4.0, name=hungry_solomon, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:22 compute-0 podman[98438]: 2025-11-24 09:29:22.147823363 +0000 UTC m=+0.025045813 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 24 09:29:22 compute-0 podman[98438]: 2025-11-24 09:29:22.241634594 +0000 UTC m=+0.118856994 container start 78544ff24afe8bc363e16bf63208a67a9cea60865a637213fb20e07c3dd25d94 (image=quay.io/ceph/grafana:10.4.0, name=hungry_solomon, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:22 compute-0 hungry_solomon[98455]: 472 0
Nov 24 09:29:22 compute-0 systemd[1]: libpod-78544ff24afe8bc363e16bf63208a67a9cea60865a637213fb20e07c3dd25d94.scope: Deactivated successfully.
Nov 24 09:29:22 compute-0 podman[98438]: 2025-11-24 09:29:22.2454992 +0000 UTC m=+0.122721600 container attach 78544ff24afe8bc363e16bf63208a67a9cea60865a637213fb20e07c3dd25d94 (image=quay.io/ceph/grafana:10.4.0, name=hungry_solomon, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:22 compute-0 podman[98438]: 2025-11-24 09:29:22.245874899 +0000 UTC m=+0.123097299 container died 78544ff24afe8bc363e16bf63208a67a9cea60865a637213fb20e07c3dd25d94 (image=quay.io/ceph/grafana:10.4.0, name=hungry_solomon, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:22 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a1b88d77c4fc1ae478f6f77afdae4ebad12ed31b3efd4f744418e33ace3c23b-merged.mount: Deactivated successfully.
Nov 24 09:29:22 compute-0 podman[98438]: 2025-11-24 09:29:22.29142181 +0000 UTC m=+0.168644210 container remove 78544ff24afe8bc363e16bf63208a67a9cea60865a637213fb20e07c3dd25d94 (image=quay.io/ceph/grafana:10.4.0, name=hungry_solomon, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:22 compute-0 systemd[1]: libpod-conmon-78544ff24afe8bc363e16bf63208a67a9cea60865a637213fb20e07c3dd25d94.scope: Deactivated successfully.
Nov 24 09:29:22 compute-0 systemd[1]: Reloading.
Nov 24 09:29:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 09:29:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 09:29:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 09:29:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Nov 24 09:29:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 24 09:29:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 09:29:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:22 compute-0 systemd-sysv-generator[98503]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:22 compute-0 systemd-rc-local-generator[98498]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:22 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:22 compute-0 systemd[1]: Reloading.
Nov 24 09:29:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:22 compute-0 systemd-rc-local-generator[98565]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:22 compute-0 systemd-sysv-generator[98568]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:22 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 24 09:29:22 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 24 09:29:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 24 09:29:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:29:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:29:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:29:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 24 09:29:22 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:29:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 24 09:29:22 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[12.19( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[12.1c( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[10.18( empty local-lis/les=0/0 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[10.5( empty local-lis/les=0/0 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[12.8( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[12.a( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[12.e( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[12.c( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[12.b( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[12.6( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[12.12( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[12.10( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322216034s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.106506348s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322190285s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.106506348s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.14( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283275604s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.068496704s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.14( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283259392s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.068496704s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.325256348s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.110626221s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.325242043s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.110626221s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.15( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283199310s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.068618774s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.15( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283181190s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.068618774s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:22 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.324848175s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.110504150s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.324740410s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.110504150s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.17( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282783508s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.068634033s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.17( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282761574s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.068634033s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.324741364s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.110656738s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.324725151s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.110656738s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.10( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282703400s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.068679810s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.10( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282691002s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.068679810s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.324648857s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.110702515s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.324631691s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.110702515s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.11( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282659531s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.068786621s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.11( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282649994s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.068786621s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.324448586s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.110687256s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.2( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282519341s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.068771362s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.324433327s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.110687256s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.2( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282505989s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.068771362s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.3( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282503128s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.068908691s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.3( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282492638s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.068908691s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.f( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283065796s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.069564819s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.8( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282505989s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.069076538s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.f( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283019066s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.069564819s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.8( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282491684s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.069076538s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-mon[74331]: 9.10 scrub starts
Nov 24 09:29:22 compute-0 ceph-mon[74331]: 9.10 scrub ok
Nov 24 09:29:22 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:22 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:22 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:22 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 24 09:29:22 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:29:22 compute-0 ceph-mon[74331]: 10.5 scrub starts
Nov 24 09:29:22 compute-0 ceph-mon[74331]: 10.5 scrub ok
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.16( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.281675339s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.068496704s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.9( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282152176s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.069000244s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.16( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.281651497s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.068496704s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.9( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282139778s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.069000244s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.324040413s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111099243s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.324027061s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111099243s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.d( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282050133s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.069137573s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.323940277s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111129761s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.323931694s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111129761s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.c( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282052040s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.069335938s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.c( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282039642s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.069335938s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.d( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.282032967s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.069137573s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.323741913s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111206055s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.323727608s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111206055s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.323531151s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111068726s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.323513031s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111068726s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.b( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.281785011s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.069366455s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.a( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.281472206s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.069076538s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.b( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.281768799s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.069366455s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.a( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.281461716s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.069076538s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.323451042s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111297607s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.323435783s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111297607s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.323212624s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111251831s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.323200226s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111251831s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.6( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.284497261s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.072692871s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.6( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.284482002s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.072692871s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.5( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.284579277s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.072845459s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.5( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.284568787s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.072845459s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322976112s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111267090s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322957993s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111267090s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322921753s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111343384s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322868347s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111343384s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322761536s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111343384s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.1b( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.284258842s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.072845459s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322749138s) [2] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111343384s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.1b( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.284237862s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.072845459s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322693825s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111358643s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322680473s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111358643s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.19( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.284192085s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.072952271s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.19( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.284177780s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.072952271s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.4( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.284022331s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.072921753s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.4( v 38'12 (0'0,38'12] local-lis/les=54/55 n=1 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.284009933s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.072921753s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322412491s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111373901s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322395325s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111373901s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.18( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.284050941s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.073028564s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322755814s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111816406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.18( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283996582s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.073028564s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.1f( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283964157s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.073043823s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322742462s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111816406s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.1f( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283950806s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.073043823s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322723389s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.111862183s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.322679520s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.111862183s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.1c( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283887863s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.073242188s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.1c( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283866882s) [2] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.073242188s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.324421883s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 active pruub 178.113861084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=60 pruub=9.324409485s) [1] r=-1 lpr=60 pi=[56,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 178.113861084s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.12( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283895493s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 active pruub 184.073532104s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 60 pg[8.12( v 38'12 (0'0,38'12] local-lis/les=54/55 n=0 ec=54/37 lis/c=54/54 les/c/f=55/55/0 sis=60 pruub=15.283883095s) [1] r=-1 lpr=60 pi=[54,60)/1 crt=38'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.073532104s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:22 compute-0 sudo[98537]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flbxknavifxqcxvvdijdeuqlgbocaeie ; /usr/bin/python3'
Nov 24 09:29:22 compute-0 sudo[98537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:29:22 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:29:23 compute-0 python3[98575]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:29:23 compute-0 podman[98621]: 2025-11-24 09:29:23.174299309 +0000 UTC m=+0.043372208 container create a0674656060959d25392ea4042b567724541ad68ff4b7e0cdef72cb164c1b850 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:23 compute-0 podman[98622]: 2025-11-24 09:29:23.191529517 +0000 UTC m=+0.045326017 container create 887f2f92f9dc4668ed2e0f1dc4877e382d309d2642aee6a8402d0f37aef12e7d (image=quay.io/ceph/ceph:v19, name=romantic_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 09:29:23 compute-0 systemd[1]: Started libpod-conmon-887f2f92f9dc4668ed2e0f1dc4877e382d309d2642aee6a8402d0f37aef12e7d.scope.
Nov 24 09:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f569a0a73c5eca6b9867f2fd2c49dff79b11c3f7968f5a53a5ba66d1653ddc9/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f569a0a73c5eca6b9867f2fd2c49dff79b11c3f7968f5a53a5ba66d1653ddc9/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f569a0a73c5eca6b9867f2fd2c49dff79b11c3f7968f5a53a5ba66d1653ddc9/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f569a0a73c5eca6b9867f2fd2c49dff79b11c3f7968f5a53a5ba66d1653ddc9/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f569a0a73c5eca6b9867f2fd2c49dff79b11c3f7968f5a53a5ba66d1653ddc9/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:23 compute-0 podman[98621]: 2025-11-24 09:29:23.239336184 +0000 UTC m=+0.108409103 container init a0674656060959d25392ea4042b567724541ad68ff4b7e0cdef72cb164c1b850 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71de7392d3f20d11942bc1b870f134aab21d577bea900a54b756419fce18df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71de7392d3f20d11942bc1b870f134aab21d577bea900a54b756419fce18df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:23 compute-0 podman[98621]: 2025-11-24 09:29:23.249298882 +0000 UTC m=+0.118371781 container start a0674656060959d25392ea4042b567724541ad68ff4b7e0cdef72cb164c1b850 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:23 compute-0 podman[98621]: 2025-11-24 09:29:23.155584905 +0000 UTC m=+0.024657824 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 24 09:29:23 compute-0 bash[98621]: a0674656060959d25392ea4042b567724541ad68ff4b7e0cdef72cb164c1b850
Nov 24 09:29:23 compute-0 podman[98622]: 2025-11-24 09:29:23.258226714 +0000 UTC m=+0.112023234 container init 887f2f92f9dc4668ed2e0f1dc4877e382d309d2642aee6a8402d0f37aef12e7d (image=quay.io/ceph/ceph:v19, name=romantic_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:29:23 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:29:23 compute-0 podman[98622]: 2025-11-24 09:29:23.170919736 +0000 UTC m=+0.024716256 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:29:23 compute-0 podman[98622]: 2025-11-24 09:29:23.265952565 +0000 UTC m=+0.119749065 container start 887f2f92f9dc4668ed2e0f1dc4877e382d309d2642aee6a8402d0f37aef12e7d (image=quay.io/ceph/ceph:v19, name=romantic_kalam, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:29:23 compute-0 podman[98622]: 2025-11-24 09:29:23.269213117 +0000 UTC m=+0.123009617 container attach 887f2f92f9dc4668ed2e0f1dc4877e382d309d2642aee6a8402d0f37aef12e7d (image=quay.io/ceph/ceph:v19, name=romantic_kalam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:29:23 compute-0 sudo[98139]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Nov 24 09:29:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev de8470b8-72e9-4ae4-b13f-249be9f0e534 (Updating grafana deployment (+1 -> 1))
Nov 24 09:29:23 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event de8470b8-72e9-4ae4-b13f-249be9f0e534 (Updating grafana deployment (+1 -> 1)) in 8 seconds
Nov 24 09:29:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Nov 24 09:29:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 59eac4a3-cc7a-4062-988e-7cc8bd0e133a (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 24 09:29:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Nov 24 09:29:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.fxvlbj on compute-0
Nov 24 09:29:23 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.fxvlbj on compute-0
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.420771041Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-11-24T09:29:23Z
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421029407Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421037157Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421041298Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421045088Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421048788Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421052418Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421055858Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421059948Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421063308Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421068218Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421071858Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421075818Z level=info msg=Target target=[all]
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421082619Z level=info msg="Path Home" path=/usr/share/grafana
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421086269Z level=info msg="Path Data" path=/var/lib/grafana
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421090439Z level=info msg="Path Logs" path=/var/log/grafana
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421093849Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421114609Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=settings t=2025-11-24T09:29:23.421118339Z level=info msg="App mode production"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=sqlstore t=2025-11-24T09:29:23.421493349Z level=info msg="Connecting to DB" dbtype=sqlite3
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=sqlstore t=2025-11-24T09:29:23.42150827Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.422230987Z level=info msg="Starting DB migrations"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.423383736Z level=info msg="Executing migration" id="create migration_log table"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.42434935Z level=info msg="Migration successfully executed" id="create migration_log table" duration=965.654µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.427882668Z level=info msg="Executing migration" id="create user table"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.428578525Z level=info msg="Migration successfully executed" id="create user table" duration=695.017µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.430619866Z level=info msg="Executing migration" id="add unique index user.login"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.431392885Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=771.999µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.434960493Z level=info msg="Executing migration" id="add unique index user.email"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.435694992Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=735.469µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.43842278Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.439122027Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=698.927µs
Nov 24 09:29:23 compute-0 sudo[98700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.441471545Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.442142762Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=672.147µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.443633259Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Nov 24 09:29:23 compute-0 sudo[98700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.446000598Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.365899ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.447754531Z level=info msg="Executing migration" id="create user table v2"
Nov 24 09:29:23 compute-0 sudo[98700]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.448441819Z level=info msg="Migration successfully executed" id="create user table v2" duration=686.868µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.452695984Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.45334461Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=648.956µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.45695686Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.457556535Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=598.484µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.459871152Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.46019935Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=328.578µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.46501145Z level=info msg="Executing migration" id="Drop old table user_v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.465508792Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=497.202µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.470130037Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.47103768Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=907.463µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.475212263Z level=info msg="Executing migration" id="Update user table charset"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.475254314Z level=info msg="Migration successfully executed" id="Update user table charset" duration=38.721µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.480390602Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.481366086Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=975.874µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.483761186Z level=info msg="Executing migration" id="Add missing user data"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.484137605Z level=info msg="Migration successfully executed" id="Add missing user data" duration=372.299µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.487057417Z level=info msg="Executing migration" id="Add is_disabled column to user"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.48836542Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.307733ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.491652502Z level=info msg="Executing migration" id="Add index user.login/user.email"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.492619565Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=968.123µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.494582425Z level=info msg="Executing migration" id="Add is_service_account column to user"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.495799625Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.21692ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.499148628Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.506157602Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.011684ms
Nov 24 09:29:23 compute-0 sudo[98727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:23 compute-0 sudo[98727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.509043194Z level=info msg="Executing migration" id="Add uid column to user"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.510396227Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.353584ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.513276689Z level=info msg="Executing migration" id="Update uid column values for users"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.513845153Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=571.214µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.516351875Z level=info msg="Executing migration" id="Add unique index user_uid"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.517090104Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=738.559µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.523559834Z level=info msg="Executing migration" id="create temp user table v1-7"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.524502048Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=943.023µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.530847816Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.531988154Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.142979ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.544387781Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.545277414Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=893.793µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.552130254Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.552895062Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=764.878µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.55800825Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.55881046Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=802.88µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.563923627Z level=info msg="Executing migration" id="Update temp_user table charset"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.563962827Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=45.231µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.568012788Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.568754517Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=744.679µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.571020983Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.571985717Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=963.844µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.574044429Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.574633273Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=588.704µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.576689503Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.577307919Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=619.086µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.579062053Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.582212211Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.151278ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.584907868Z level=info msg="Executing migration" id="create temp_user v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.585576785Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=668.897µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.587356189Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.588039846Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=683.577µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.5897926Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.590547798Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=755.619µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.593189823Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.593783459Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=593.126µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.595689086Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.596349692Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=661.046µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.601250914Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.601570292Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=319.838µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.610907394Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.611598191Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=689.976µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.613507389Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.613838107Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=330.888µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.617013065Z level=info msg="Executing migration" id="create star table"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.617562209Z level=info msg="Migration successfully executed" id="create star table" duration=549.194µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.620000439Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.620628305Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=627.526µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.623537198Z level=info msg="Executing migration" id="create org table v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.624161983Z level=info msg="Migration successfully executed" id="create org table v1" duration=625.945µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.626296336Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.62687096Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=576.144µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.62886237Z level=info msg="Executing migration" id="create org_user table v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.629399473Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=536.973µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.631883275Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.632552671Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=669.316µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.634582782Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.635216387Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=632.875µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.637193887Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.637788671Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=594.364µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.639522254Z level=info msg="Executing migration" id="Update org table charset"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.639544595Z level=info msg="Migration successfully executed" id="Update org table charset" duration=22.951µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.640931189Z level=info msg="Executing migration" id="Update org_user table charset"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.64095281Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=20.091µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.643184896Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.643315629Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=130.643µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.644684263Z level=info msg="Executing migration" id="create dashboard table"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.645305298Z level=info msg="Migration successfully executed" id="create dashboard table" duration=620.415µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.647156334Z level=info msg="Executing migration" id="add index dashboard.account_id"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.6477937Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=637.096µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.649663567Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.650308763Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=643.245µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.652304132Z level=info msg="Executing migration" id="create dashboard_tag table"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.652814885Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=510.743µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.654811024Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.655495682Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=684.108µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.657506591Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.658381653Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=875.402µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.660080455Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.664475434Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.394779ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.666145526Z level=info msg="Executing migration" id="create dashboard v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.666773692Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=625.696µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.668439043Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.669038427Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=599.224µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.671087129Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.671693953Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=606.374µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.674345629Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.674706228Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=363.039µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.676553395Z level=info msg="Executing migration" id="drop table dashboard_v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.677417945Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=864.18µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.678776699Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.67882571Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=49.431µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.680486072Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.681851486Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.365074ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.683149878Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.684501731Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.351513ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.686065281Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.687494917Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.429295ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.692304006Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.692979242Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=675.276µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.694724786Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.696044719Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.319813ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.697749621Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.698362786Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=612.415µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.699919634Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.700563801Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=644.027µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.702876839Z level=info msg="Executing migration" id="Update dashboard table charset"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.702900849Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=24.12µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.704523659Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.70454428Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=21.001µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.706038097Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.707492783Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.456696ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.709265487Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.710643081Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.374754ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.712351743Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.714067277Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.715183ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.715659366Z level=info msg="Executing migration" id="Add column uid in dashboard"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.717640575Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.980819ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.719216834Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.719429069Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=211.865µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.721159953Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.721817829Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=658.075µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.723805618Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.724503735Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=697.907µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.725955431Z level=info msg="Executing migration" id="Update dashboard title length"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.725976502Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=22.201µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.727545821Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.728238208Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=691.777µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.732368291Z level=info msg="Executing migration" id="create dashboard_provisioning"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.733055608Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=687.697µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.736119244Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.740060782Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.941028ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.741693112Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.742326798Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=634.036µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.744335438Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.744962014Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=626.746µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.746931642Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.747599088Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=666.316µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.749766773Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.750087761Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=320.607µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.7516645Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.752155362Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=491.382µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.754232663Z level=info msg="Executing migration" id="Add check_sum column"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.75605908Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.831427ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.759640618Z level=info msg="Executing migration" id="Add index for dashboard_title"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.760352326Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=711.378µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.765135444Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.765278638Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=143.314µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.766652282Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.766790635Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=138.733µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.768094098Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.768785016Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=690.938µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.772646281Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.774345994Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.700163ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.775940883Z level=info msg="Executing migration" id="create data_source table"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.776788674Z level=info msg="Migration successfully executed" id="create data_source table" duration=847.841µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.778757633Z level=info msg="Executing migration" id="add index data_source.account_id"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.77944084Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=683.207µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.781622424Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.782454004Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=831µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.784510456Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.785245604Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=736.178µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.786731121Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.787366497Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=634.927µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.789169391Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.79356962Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.398269ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.79517142Z level=info msg="Executing migration" id="create data_source table v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.795985281Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=814.921µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.797749534Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.798493473Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=744.649µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.800134994Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.800867983Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=732.739µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.803002875Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.803547838Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=546.013µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.804903952Z level=info msg="Executing migration" id="Add column with_credentials"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.806647216Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.742534ms
Nov 24 09:29:23 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.808213395Z level=info msg="Executing migration" id="Add secure json data column"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.809993739Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.780034ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.81168245Z level=info msg="Executing migration" id="Update data_source table charset"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.811704651Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=23.551µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.813029564Z level=info msg="Executing migration" id="Update initial version to 1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.813221139Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=191.934µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.815045655Z level=info msg="Executing migration" id="Add read_only data column"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.816821068Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.775623ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.818314695Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.818473539Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=160.374µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.819899765Z level=info msg="Executing migration" id="Update json_data with nulls"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.820039098Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=139.573µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.821681039Z level=info msg="Executing migration" id="Add uid column"
Nov 24 09:29:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.823419553Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.738034ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.826425267Z level=info msg="Executing migration" id="Update uid value"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.826583121Z level=info msg="Migration successfully executed" id="Update uid value" duration=155.234µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.829272428Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.829919443Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=646.725µs
Nov 24 09:29:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.831909383Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Nov 24 09:29:23 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.832693542Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=783.499µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.835232806Z level=info msg="Executing migration" id="create api_key table"
Nov 24 09:29:23 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.83659878Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.364174ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.842350422Z level=info msg="Executing migration" id="add index api_key.account_id"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.84306337Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=712.668µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.849302145Z level=info msg="Executing migration" id="add index api_key.key"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.850328491Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.025646ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.865493998Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Nov 24 09:29:23 compute-0 ceph-mon[74331]: pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:23 compute-0 ceph-mon[74331]: 8.17 scrub starts
Nov 24 09:29:23 compute-0 ceph-mon[74331]: 8.17 scrub ok
Nov 24 09:29:23 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:29:23 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:29:23 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:29:23 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 24 09:29:23 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:29:23 compute-0 ceph-mon[74331]: osdmap e60: 3 total, 3 up, 3 in
Nov 24 09:29:23 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:23 compute-0 ceph-mon[74331]: osdmap e61: 3 total, 3 up, 3 in
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.868158754Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=2.667057ms
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[10.2( v 42'48 (0'0,42'48] local-lis/les=60/61 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[10.13( v 42'48 (0'0,42'48] local-lis/les=60/61 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[10.15( v 59'51 lc 42'22 (0'0,59'51] local-lis/les=60/61 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=59'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[10.14( v 59'51 lc 42'44 (0'0,59'51] local-lis/les=60/61 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=59'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[12.6( v 57'56 lc 51'43 (0'0,57'56] local-lis/les=60/61 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[12.12( v 57'56 (0'0,57'56] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[12.10( v 59'59 lc 51'27 (0'0,59'59] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=59'59 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[12.b( v 57'56 (0'0,57'56] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[12.c( v 57'56 (0'0,57'56] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[12.e( v 57'56 (0'0,57'56] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[10.8( v 42'48 (0'0,42'48] local-lis/les=60/61 n=1 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[12.a( v 57'56 lc 0'0 (0'0,57'56] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=57'56 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[10.18( v 42'48 (0'0,42'48] local-lis/les=60/61 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[10.1b( v 42'48 (0'0,42'48] local-lis/les=60/61 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[12.19( v 57'56 (0'0,57'56] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[10.5( v 42'48 (0'0,42'48] local-lis/les=60/61 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[12.1c( v 57'56 (0'0,57'56] local-lis/les=60/61 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[10.19( v 42'48 (0'0,42'48] local-lis/les=60/61 n=0 ec=56/41 lis/c=56/56 les/c/f=57/57/0 sis=60) [0] r=0 lpr=60 pi=[56,60)/1 crt=42'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 61 pg[12.8( v 57'56 (0'0,57'56] local-lis/les=60/61 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=60) [0] r=0 lpr=60 pi=[58,60)/1 crt=57'56 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.877732251Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.878950091Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.22141ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.88249638Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.88334194Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=848.92µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.885894704Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.886913709Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.022565ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.888997201Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Nov 24 09:29:23 compute-0 podman[98793]: 2025-11-24 09:29:23.892356455 +0000 UTC m=+0.048815754 container create 6e818d624cb816ff45f5a192d53f594487c77d49f95a9e7e46d8c7a0f7cdbfd6 (image=quay.io/ceph/haproxy:2.3, name=admiring_carson)
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.894796045Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=5.791475ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.896572629Z level=info msg="Executing migration" id="create api_key table v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.897365098Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=792.519µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.89900037Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.899698687Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=698.787µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.902762943Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.903706406Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=943.693µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.905898641Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.907831789Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.932398ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.911017848Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.911606823Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=591.655µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.913282194Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.91391881Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=635.586µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.915618702Z level=info msg="Executing migration" id="Update api_key table charset"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.915656204Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=32.752µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.917413557Z level=info msg="Executing migration" id="Add expires to api_key table"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.920273338Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.859711ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.922571055Z level=info msg="Executing migration" id="Add service account foreign key"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.924964814Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.39211ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.926552644Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.926719758Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=166.954µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.92838001Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.930977933Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.597353ms
Nov 24 09:29:23 compute-0 systemd[1]: Started libpod-conmon-6e818d624cb816ff45f5a192d53f594487c77d49f95a9e7e46d8c7a0f7cdbfd6.scope.
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.932930352Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.934891171Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.959799ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.937373383Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.938653795Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.281122ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.941180837Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.941978837Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=793.58µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.944663383Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.945782761Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.119648ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.947715859Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.949030672Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.314603ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.952742074Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.954300753Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.563909ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.957819571Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.95902309Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.205899ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.961257236Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.961312927Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=56.111µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.963367579Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.963399839Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=33.48µs
Nov 24 09:29:23 compute-0 podman[98793]: 2025-11-24 09:29:23.870845541 +0000 UTC m=+0.027304850 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.966772353Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Nov 24 09:29:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.970903546Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.128002ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.97389358Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.976991336Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.090967ms
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.979984021Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.980170866Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=186.666µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.983375965Z level=info msg="Executing migration" id="create quota table v1"
Nov 24 09:29:23 compute-0 podman[98793]: 2025-11-24 09:29:23.983776215 +0000 UTC m=+0.140235524 container init 6e818d624cb816ff45f5a192d53f594487c77d49f95a9e7e46d8c7a0f7cdbfd6 (image=quay.io/ceph/haproxy:2.3, name=admiring_carson)
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.98436711Z level=info msg="Migration successfully executed" id="create quota table v1" duration=993.225µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.986735658Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.987513268Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=778.87µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.98962627Z level=info msg="Executing migration" id="Update quota table charset"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.989649261Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=24.761µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.991166228Z level=info msg="Executing migration" id="create plugin_setting table"
Nov 24 09:29:23 compute-0 podman[98793]: 2025-11-24 09:29:23.991799955 +0000 UTC m=+0.148259244 container start 6e818d624cb816ff45f5a192d53f594487c77d49f95a9e7e46d8c7a0f7cdbfd6 (image=quay.io/ceph/haproxy:2.3, name=admiring_carson)
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.991903987Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=737.649µs
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.993912687Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.994666416Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=753.419µs
Nov 24 09:29:23 compute-0 podman[98793]: 2025-11-24 09:29:23.99564851 +0000 UTC m=+0.152107799 container attach 6e818d624cb816ff45f5a192d53f594487c77d49f95a9e7e46d8c7a0f7cdbfd6 (image=quay.io/ceph/haproxy:2.3, name=admiring_carson)
Nov 24 09:29:23 compute-0 admiring_carson[98857]: 0 0
Nov 24 09:29:23 compute-0 systemd[1]: libpod-6e818d624cb816ff45f5a192d53f594487c77d49f95a9e7e46d8c7a0f7cdbfd6.scope: Deactivated successfully.
Nov 24 09:29:23 compute-0 podman[98793]: 2025-11-24 09:29:23.998905261 +0000 UTC m=+0.155364550 container died 6e818d624cb816ff45f5a192d53f594487c77d49f95a9e7e46d8c7a0f7cdbfd6 (image=quay.io/ceph/haproxy:2.3, name=admiring_carson)
Nov 24 09:29:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:23.996749507Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.000129652Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.378324ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.001859214Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.001890375Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=31.621µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.003577227Z level=info msg="Executing migration" id="create session table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.004389727Z level=info msg="Migration successfully executed" id="create session table" duration=812.63µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.006814838Z level=info msg="Executing migration" id="Drop old table playlist table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.00690476Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=91.422µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.009315069Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.009390071Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=75.082µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.010958381Z level=info msg="Executing migration" id="create playlist table v2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.011596896Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=638.795µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.013424151Z level=info msg="Executing migration" id="create playlist item table v2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.014315674Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=891.583µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.017294038Z level=info msg="Executing migration" id="Update playlist table charset"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.017323878Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=29.65µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.018903697Z level=info msg="Executing migration" id="Update playlist_item table charset"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.018930078Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=29.251µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.020651542Z level=info msg="Executing migration" id="Add playlist column created_at"
Nov 24 09:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-577b493058710a26c79de1135655ed9534e924d0e0983dbcb00ed8fe5bcde01a-merged.mount: Deactivated successfully.
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.025787938Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.131606ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.028414054Z level=info msg="Executing migration" id="Add playlist column updated_at"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.031341066Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.930652ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.033238114Z level=info msg="Executing migration" id="drop preferences table v2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.033337677Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=101.223µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.035866859Z level=info msg="Executing migration" id="drop preferences table v3"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.035948661Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=81.672µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.040451673Z level=info msg="Executing migration" id="create preferences table v3"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.041451807Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.000354ms
Nov 24 09:29:24 compute-0 podman[98793]: 2025-11-24 09:29:24.043212652 +0000 UTC m=+0.199671941 container remove 6e818d624cb816ff45f5a192d53f594487c77d49f95a9e7e46d8c7a0f7cdbfd6 (image=quay.io/ceph/haproxy:2.3, name=admiring_carson)
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.043619582Z level=info msg="Executing migration" id="Update preferences table charset"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.043655093Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=37.76µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.046561055Z level=info msg="Executing migration" id="Add column team_id in preferences"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.049144949Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.539593ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.050984304Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.051190729Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=210.035µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.055387814Z level=info msg="Executing migration" id="Add column week_start in preferences"
Nov 24 09:29:24 compute-0 systemd[1]: libpod-conmon-6e818d624cb816ff45f5a192d53f594487c77d49f95a9e7e46d8c7a0f7cdbfd6.scope: Deactivated successfully.
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.058386938Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.003765ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.060749737Z level=info msg="Executing migration" id="Add column preferences.json_data"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.063189688Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.439641ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.064964612Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.065016703Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=55.771µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.068404168Z level=info msg="Executing migration" id="Add preferences index org_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.069419432Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.025485ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.078382875Z level=info msg="Executing migration" id="Add preferences index user_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.079628526Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.249471ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.083085752Z level=info msg="Executing migration" id="create alert table v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.084402955Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.318632ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.087548872Z level=info msg="Executing migration" id="add index alert org_id & id "
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.088466656Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=917.293µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.090997409Z level=info msg="Executing migration" id="add index alert state"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.091683625Z level=info msg="Migration successfully executed" id="add index alert state" duration=685.576µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.094863865Z level=info msg="Executing migration" id="add index alert dashboard_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.096309091Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.449416ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.100036353Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.100739801Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=700.268µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.103701814Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.104490913Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=790.259µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.107047187Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.107853987Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=811.11µs
Nov 24 09:29:24 compute-0 systemd[1]: Reloading.
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.113908947Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.122151092Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=8.241075ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.125954587Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.126945671Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=994.684µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.12931903Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.130023228Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=704.668µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.133404672Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.133659908Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=293.147µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.135677788Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.136226232Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=548.224µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.138917639Z level=info msg="Executing migration" id="create alert_notification table v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.139571794Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=654.265µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.143881302Z level=info msg="Executing migration" id="Add column is_default"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.14663439Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.751358ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.150972909Z level=info msg="Executing migration" id="Add column frequency"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.155442039Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.470261ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.159215992Z level=info msg="Executing migration" id="Add column send_reminder"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.163338315Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.039821ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.165438737Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.169363745Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.898877ms
Nov 24 09:29:24 compute-0 romantic_kalam[98654]: could not fetch user info: no user info saved
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.175838825Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.177997469Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=2.162574ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.180998374Z level=info msg="Executing migration" id="Update alert table charset"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.181047235Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=52.911µs
Nov 24 09:29:24 compute-0 systemd-rc-local-generator[98914]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.184942332Z level=info msg="Executing migration" id="Update alert_notification table charset"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.184972143Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=30.591µs
Nov 24 09:29:24 compute-0 systemd-sysv-generator[98921]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.193225798Z level=info msg="Executing migration" id="create notification_journal table v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.194740445Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.524557ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.200517569Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.201558115Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.041346ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.207049771Z level=info msg="Executing migration" id="drop alert_notification_journal"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.208028195Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=979.074µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.210897866Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.212495837Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.598591ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.213956383Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.214860905Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=904.112µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.216445324Z level=info msg="Executing migration" id="Add for to alert table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.219553892Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.106978ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.224028403Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.227066648Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.043905ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.228456293Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.228598556Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=142.453µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.230048222Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.23074269Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=693.968µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.233342484Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.234071982Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=729.459µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.237555078Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.240283486Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.728388ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.241959888Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.24200793Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=48.302µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.243545618Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.244408369Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=861.21µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.246385598Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.247597638Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.21442ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.250318816Z level=info msg="Executing migration" id="Drop old annotation table v4"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.25046805Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=152.614µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.252383897Z level=info msg="Executing migration" id="create annotation table v5"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.253684949Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.301452ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.257395142Z level=info msg="Executing migration" id="add index annotation 0 v3"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.258844848Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.456406ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.261094203Z level=info msg="Executing migration" id="add index annotation 1 v3"
Nov 24 09:29:24 compute-0 podman[98622]: 2025-11-24 09:29:24.260864438 +0000 UTC m=+1.114660938 container died 887f2f92f9dc4668ed2e0f1dc4877e382d309d2642aee6a8402d0f37aef12e7d (image=quay.io/ceph/ceph:v19, name=romantic_kalam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.262341254Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.249421ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.266905897Z level=info msg="Executing migration" id="add index annotation 2 v3"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:24 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00003b70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.267881862Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=981.305µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.270367124Z level=info msg="Executing migration" id="add index annotation 3 v3"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.271238525Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=872.141µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.277321817Z level=info msg="Executing migration" id="add index annotation 4 v3"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.279469339Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=2.150253ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.282657659Z level=info msg="Executing migration" id="Update annotation table charset"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.28268703Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=30.311µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.284845053Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.28831325Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.467917ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.290848222Z level=info msg="Executing migration" id="Drop category_id index"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.291813696Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=964.794µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.293610061Z level=info msg="Executing migration" id="Add column tags to annotation table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.296723199Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.112209ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.299675342Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.300543533Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=869.911µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.302473491Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.303436675Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=963.083µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.306782378Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.307668481Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=886.023µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.310415088Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.320411187Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=9.988659ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.324225511Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.325296858Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.072597ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.327239366Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.328299982Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.062096ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.330310653Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.330643971Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=338.238µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.334211319Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.334846395Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=634.276µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.336425334Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.336599629Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=174.685µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.342836764Z level=info msg="Executing migration" id="Add created time to annotation table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.347000728Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.158043ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.349321295Z level=info msg="Executing migration" id="Add updated time to annotation table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.35317654Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.853706ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.355000215Z level=info msg="Executing migration" id="Add index for created in annotation table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.356002981Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.003106ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.357753334Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.358684567Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=931.003µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.362188304Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.362480512Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=292.368µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.364140843Z level=info msg="Executing migration" id="Add epoch_end column"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.368199744Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.053811ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.370040099Z level=info msg="Executing migration" id="Add index for epoch_end"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.371222239Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.18345ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.373248309Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.373467764Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=219.885µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.375229488Z level=info msg="Executing migration" id="Move region to single row"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.375673919Z level=info msg="Migration successfully executed" id="Move region to single row" duration=444.941µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.377193717Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.378241283Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.042936ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.379811542Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.380717505Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=905.863µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.384983211Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.386336224Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.353953ms
Nov 24 09:29:24 compute-0 systemd[1]: libpod-887f2f92f9dc4668ed2e0f1dc4877e382d309d2642aee6a8402d0f37aef12e7d.scope: Deactivated successfully.
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.390013205Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.391249336Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.232801ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.394766424Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.396057836Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.292772ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.401295945Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.402445184Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.150519ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.404755772Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.404839574Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=84.542µs
Nov 24 09:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e71de7392d3f20d11942bc1b870f134aab21d577bea900a54b756419fce18df-merged.mount: Deactivated successfully.
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.409326715Z level=info msg="Executing migration" id="create test_data table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.410408942Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.083127ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.420001371Z level=info msg="Executing migration" id="create dashboard_version table v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.42119169Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.18891ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.427069896Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.428169414Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.100808ms
Nov 24 09:29:24 compute-0 systemd[1]: Reloading.
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.430437029Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.431470675Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.034616ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.433666649Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.433878725Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=209.666µs
Nov 24 09:29:24 compute-0 podman[98622]: 2025-11-24 09:29:24.433939377 +0000 UTC m=+1.287735877 container remove 887f2f92f9dc4668ed2e0f1dc4877e382d309d2642aee6a8402d0f37aef12e7d (image=quay.io/ceph/ceph:v19, name=romantic_kalam, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.437664189Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.438202993Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=539.934µs
Nov 24 09:29:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v54: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.439741941Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Nov 24 09:29:24 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.439806222Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=64.821µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.441525545Z level=info msg="Executing migration" id="create team table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.442379206Z level=info msg="Migration successfully executed" id="create team table" duration=853.251µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.444335095Z level=info msg="Executing migration" id="add index team.org_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.445388401Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.052106ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.44736169Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.448311464Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=949.413µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.450276512Z level=info msg="Executing migration" id="Add column uid in team"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.45423367Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.956728ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.455753158Z level=info msg="Executing migration" id="Update uid column values in team"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.455922262Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=169.394µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.457372048Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.458077406Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=704.408µs
Nov 24 09:29:24 compute-0 sudo[98537]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.460706552Z level=info msg="Executing migration" id="create team member table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.461345227Z level=info msg="Migration successfully executed" id="create team member table" duration=638.215µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.46428306Z level=info msg="Executing migration" id="add index team_member.org_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.465185993Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=902.943µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.467404558Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.468350071Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=947.403µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.470682249Z level=info msg="Executing migration" id="add index team_member.team_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.471632643Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=950.084µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.476376031Z level=info msg="Executing migration" id="Add column email to team table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.480457362Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.080071ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.484343109Z level=info msg="Executing migration" id="Add column external to team_member table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.488686357Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.342657ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.490563083Z level=info msg="Executing migration" id="Add column permission to team_member table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:24 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4000d90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:24 compute-0 systemd-rc-local-generator[98967]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.495074475Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.507541ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.497906585Z level=info msg="Executing migration" id="create dashboard acl table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.498996582Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.092377ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.501148056Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Nov 24 09:29:24 compute-0 systemd-sysv-generator[98972]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.501954546Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=807.17µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.504480289Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.505686819Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.208501ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.51016334Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.511352969Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.190359ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.513328408Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.514093957Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=765.379µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.516467447Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.51743844Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=974.473µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.519553083Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.520312191Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=759.048µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.522356243Z level=info msg="Executing migration" id="add index dashboard_permission"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.52306783Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=711.147µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.52504828Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.525552992Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=504.792µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.527273154Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.527443409Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=170.495µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.528893916Z level=info msg="Executing migration" id="create tag table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.529562202Z level=info msg="Migration successfully executed" id="create tag table" duration=668.266µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.531598942Z level=info msg="Executing migration" id="add index tag.key_value"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.53232578Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=726.468µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.534410362Z level=info msg="Executing migration" id="create login attempt table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.534979706Z level=info msg="Migration successfully executed" id="create login attempt table" duration=569.734µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.537113149Z level=info msg="Executing migration" id="add index login_attempt.username"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.537797046Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=696.317µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.539926259Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.540660637Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=732.008µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.542920554Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.553053205Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=10.131641ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.554760018Z level=info msg="Executing migration" id="create login_attempt v2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.555346912Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=587.445µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.556938792Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.55769376Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=754.448µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.559908856Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.560178952Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=272.146µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.561600358Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.56212144Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=521.083µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.563774181Z level=info msg="Executing migration" id="create user auth table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.564381837Z level=info msg="Migration successfully executed" id="create user auth table" duration=606.796µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.567270848Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.568000576Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=729.148µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.570258812Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.570320624Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=62.752µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.574426316Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.578549368Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.121732ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.580186409Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.584042205Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.855996ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.585798948Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.589304206Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.504718ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.591227483Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.594953916Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.725983ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.596632508Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.597445107Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=812.649µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.59955253Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.603181361Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.628261ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.64182852Z level=info msg="Executing migration" id="create server_lock table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.643330217Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.503067ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.647678335Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.648462395Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=784.37µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.652245339Z level=info msg="Executing migration" id="create user auth token table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.652950896Z level=info msg="Migration successfully executed" id="create user auth token table" duration=705.357µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.655880429Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.656647718Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=766.779µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.659165251Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.659880378Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=714.827µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.663239951Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.664041192Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=801.141µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.667882337Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.671781985Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.895137ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.673586939Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.674859131Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.272362ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.679355812Z level=info msg="Executing migration" id="create cache_data table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.680379658Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.019506ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.685736931Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.687150686Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.414185ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.689501834Z level=info msg="Executing migration" id="create short_url table v1"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.690654483Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.152269ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.692892839Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.694032956Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.139177ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.696139739Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.696218111Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=79.282µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.698521249Z level=info msg="Executing migration" id="delete alert_definition table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.698627651Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=106.852µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.700639861Z level=info msg="Executing migration" id="recreate alert_definition table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.701713958Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.073887ms
Nov 24 09:29:24 compute-0 systemd[1]: libpod-conmon-887f2f92f9dc4668ed2e0f1dc4877e382d309d2642aee6a8402d0f37aef12e7d.scope: Deactivated successfully.
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.704354763Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.705458431Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.103468ms
Nov 24 09:29:24 compute-0 sudo[99003]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stktsoeqskuajcwzuaghmwlmphcmjeey ; /usr/bin/python3'
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.708310002Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.70948412Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.173968ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.711751167Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.711821038Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=70.441µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.713349376Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Nov 24 09:29:24 compute-0 sudo[99003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.714412723Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.063137ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.718652258Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.719768656Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.115598ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.723497168Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.724687998Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.18833ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.728448982Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.729645461Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.196139ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.731887077Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Nov 24 09:29:24 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.fxvlbj for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.738952563Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=7.065417ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.742851739Z level=info msg="Executing migration" id="drop alert_definition table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.744310555Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.459106ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.75171407Z level=info msg="Executing migration" id="delete alert_definition_version table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.752092899Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=380.569µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.755429622Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.756737904Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.308693ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.761551434Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.762823606Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.272682ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.765742198Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.767466911Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.724723ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.770404404Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.770776703Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=378.539µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.784169406Z level=info msg="Executing migration" id="drop alert_definition_version table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.785638562Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.473526ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.78955929Z level=info msg="Executing migration" id="create alert_instance table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.790736759Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.177209ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.793019255Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.794047751Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.027766ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.801546197Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.802534022Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=991.285µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.806699085Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.812485019Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.784784ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.814872749Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.816058208Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.1854ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.818272902Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.819291358Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.018076ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.825528883Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:24 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.849317583Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=23.7727ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.851681962Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Nov 24 09:29:24 compute-0 python3[99007]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:29:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.875527055Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=23.835232ms
Nov 24 09:29:24 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.881603245Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.883470472Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.869927ms
Nov 24 09:29:24 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 24 09:29:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.886255331Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.88780826Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.555689ms
Nov 24 09:29:24 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 24 09:29:24 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.892400204Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.899529171Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.125987ms
Nov 24 09:29:24 compute-0 ceph-mon[74331]: Deploying daemon haproxy.rgw.default.compute-0.fxvlbj on compute-0
Nov 24 09:29:24 compute-0 ceph-mon[74331]: 10.17 scrub starts
Nov 24 09:29:24 compute-0 ceph-mon[74331]: 10.17 scrub ok
Nov 24 09:29:24 compute-0 ceph-mon[74331]: 9.15 scrub starts
Nov 24 09:29:24 compute-0 ceph-mon[74331]: 9.15 scrub ok
Nov 24 09:29:24 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.902594687Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.906837512Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.243085ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.908718909Z level=info msg="Executing migration" id="create alert_rule table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.909723335Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.004985ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.911732904Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.912556255Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=823.841µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.917068437Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.917916447Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=848µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.920065651Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.920995065Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=928.743µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.923038645Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.923143517Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=104.712µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.924695786Z level=info msg="Executing migration" id="add column for to alert_rule"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.929014573Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.318057ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.931022963Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Nov 24 09:29:24 compute-0 podman[99039]: 2025-11-24 09:29:24.935232048 +0000 UTC m=+0.058644868 container create e58364d0077bce84ccaab4590eb903dacc89ac7d349565cc5589c510841a4b21 (image=quay.io/ceph/ceph:v19, name=zealous_tu, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.936436548Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.408745ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.938810286Z level=info msg="Executing migration" id="add column labels to alert_rule"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.945197455Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.382149ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.949697107Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.950873136Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.180029ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.95265298Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.953674436Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.021516ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.955325957Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.959966262Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.636055ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.961784448Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.966440343Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.650005ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.968194526Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.969169481Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=974.395µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.971395806Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.976063632Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.664496ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.978186044Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Nov 24 09:29:24 compute-0 systemd[1]: Started libpod-conmon-e58364d0077bce84ccaab4590eb903dacc89ac7d349565cc5589c510841a4b21.scope.
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.982540973Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.352479ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.984357648Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.984412639Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=55.741µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.986275525Z level=info msg="Executing migration" id="create alert_rule_version table"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.987416834Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.140849ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.989965657Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.990837838Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=872.011µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.993028063Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.994119141Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.070977ms
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.996708075Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.996777057Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=68.892µs
Nov 24 09:29:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:24.998695824Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.003846412Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.144678ms
Nov 24 09:29:25 compute-0 podman[99039]: 2025-11-24 09:29:24.910009372 +0000 UTC m=+0.033422222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.005707298Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.010351043Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.642795ms
Nov 24 09:29:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.017601774Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Nov 24 09:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb6ac281c8458fb62eb499e5814e7cd0ef9b2748cecd150de73141e325ea23c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb6ac281c8458fb62eb499e5814e7cd0ef9b2748cecd150de73141e325ea23c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.026087785Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=8.480421ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.02831788Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Nov 24 09:29:25 compute-0 podman[99066]: 2025-11-24 09:29:25.028157456 +0000 UTC m=+0.063918079 container create f215aaf82002cc22756a434066734fc2e92b1f46ea15197b4fec2852fa744603 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-rgw-default-compute-0-fxvlbj)
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.035481037Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=7.155217ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.038171354Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Nov 24 09:29:25 compute-0 podman[99039]: 2025-11-24 09:29:25.042602354 +0000 UTC m=+0.166015184 container init e58364d0077bce84ccaab4590eb903dacc89ac7d349565cc5589c510841a4b21 (image=quay.io/ceph/ceph:v19, name=zealous_tu, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.042827441Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.656187ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.044327238Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.044378849Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=51.721µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.045830655Z level=info msg="Executing migration" id=create_alert_configuration_table
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.046471611Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=641.136µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.048612464Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Nov 24 09:29:25 compute-0 podman[99039]: 2025-11-24 09:29:25.05125946 +0000 UTC m=+0.174672280 container start e58364d0077bce84ccaab4590eb903dacc89ac7d349565cc5589c510841a4b21 (image=quay.io/ceph/ceph:v19, name=zealous_tu, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.053099695Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.480561ms
Nov 24 09:29:25 compute-0 podman[99039]: 2025-11-24 09:29:25.054823828 +0000 UTC m=+0.178236688 container attach e58364d0077bce84ccaab4590eb903dacc89ac7d349565cc5589c510841a4b21 (image=quay.io/ceph/ceph:v19, name=zealous_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.055032943Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.055080554Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=48.591µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.056586602Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.061168396Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.580724ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.063124264Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.063913494Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=785.57µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.065872583Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.070701422Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.828759ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.072126968Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.072752584Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=625.797µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.074601109Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.075550733Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=945.334µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.07781286Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.082646029Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.831939ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.084436923Z level=info msg="Executing migration" id="create provenance_type table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.085199813Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=762.45µs
Nov 24 09:29:25 compute-0 podman[99066]: 2025-11-24 09:29:24.992619763 +0000 UTC m=+0.028380436 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.088430723Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.089284464Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=851.751µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.09112053Z level=info msg="Executing migration" id="create alert_image table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.091897229Z level=info msg="Migration successfully executed" id="create alert_image table" duration=776.359µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.093658873Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.094434852Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=775.249µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.096432022Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.096480473Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=48.941µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.098056782Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.098841212Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=783.73µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.100648517Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.101433555Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=785.139µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.102994905Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.103292032Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Nov 24 09:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820337ce7091101fa2af8de6c38d9cb9de9d6ff2e8a73a390c1902d5456499d7/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.105070466Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.105478126Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=408.49µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.107052175Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.107966168Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=914.253µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.109917976Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.114677935Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=4.759209ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.116452639Z level=info msg="Executing migration" id="create library_element table v1"
Nov 24 09:29:25 compute-0 podman[99066]: 2025-11-24 09:29:25.117376452 +0000 UTC m=+0.153137095 container init f215aaf82002cc22756a434066734fc2e92b1f46ea15197b4fec2852fa744603 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-rgw-default-compute-0-fxvlbj)
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.117384372Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=930.853µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.121405912Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Nov 24 09:29:25 compute-0 podman[99066]: 2025-11-24 09:29:25.122369675 +0000 UTC m=+0.158130298 container start f215aaf82002cc22756a434066734fc2e92b1f46ea15197b4fec2852fa744603 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-rgw-default-compute-0-fxvlbj)
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.122558961Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.153079ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.124727854Z level=info msg="Executing migration" id="create library_element_connection table v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.125598026Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=869.532µs
Nov 24 09:29:25 compute-0 bash[99066]: f215aaf82002cc22756a434066734fc2e92b1f46ea15197b4fec2852fa744603
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.127546644Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.128516779Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=969.885µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.130552559Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.131619696Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.066917ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.133826891Z level=info msg="Executing migration" id="increase max description length to 2048"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.133909883Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=83.332µs
Nov 24 09:29:25 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.fxvlbj for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-rgw-default-compute-0-fxvlbj[99088]: [NOTICE] 327/092925 (2) : New worker #1 (4) forked
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.136022675Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.136173328Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=150.933µs
Nov 24 09:29:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.138289782Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.138883006Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=598.595µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.140859786Z level=info msg="Executing migration" id="create data_keys table"
Nov 24 09:29:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.004000097s ======
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.141999393Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.139568ms
Nov 24 09:29:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:25.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000097s
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.145429759Z level=info msg="Executing migration" id="create secrets table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.146226708Z level=info msg="Migration successfully executed" id="create secrets table" duration=794.659µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.149222823Z level=info msg="Executing migration" id="rename data_keys name column to id"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.182056198Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=32.816305ms
Nov 24 09:29:25 compute-0 sudo[98727]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.185269708Z level=info msg="Executing migration" id="add name column into data_keys"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.190813036Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.547478ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.192639652Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.192780555Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=141.263µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.194313743Z level=info msg="Executing migration" id="rename data_keys name column to label"
Nov 24 09:29:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.223134158Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=28.815755ms
Nov 24 09:29:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.236356157Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.262304891Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=25.945954ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.271585002Z level=info msg="Executing migration" id="create kv_store table v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.272478885Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=892.603µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.275074368Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.276035673Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=960.984µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.287544238Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.287792005Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=252.867µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.291692551Z level=info msg="Executing migration" id="create permission table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.292605004Z level=info msg="Migration successfully executed" id="create permission table" duration=912.633µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.296358018Z level=info msg="Executing migration" id="add unique index permission.role_id"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.297503265Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.144807ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.302293805Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.303231879Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=937.564µs
Nov 24 09:29:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:25 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.tariiq on compute-2
Nov 24 09:29:25 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.tariiq on compute-2
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.307797372Z level=info msg="Executing migration" id="create role table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.308639132Z level=info msg="Migration successfully executed" id="create role table" duration=840.04µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.311166165Z level=info msg="Executing migration" id="add column display_name"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.317125923Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.957408ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.318669782Z level=info msg="Executing migration" id="add column group_name"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.323936962Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.26578ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.329071Z level=info msg="Executing migration" id="add index role.org_id"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.330053795Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=982.125µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.358821009Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.360623594Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.804406ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.36408974Z level=info msg="Executing migration" id="add index role_org_id_uid"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.365025033Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=935.283µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.367790232Z level=info msg="Executing migration" id="create team role table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.368601532Z level=info msg="Migration successfully executed" id="create team role table" duration=810.119µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.371896394Z level=info msg="Executing migration" id="add index team_role.org_id"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.372905668Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.009454ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.37657634Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.378038016Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.461796ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.386034565Z level=info msg="Executing migration" id="add index team_role.team_id"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.387553793Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.517708ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.39350419Z level=info msg="Executing migration" id="create user role table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.39471194Z level=info msg="Migration successfully executed" id="create user role table" duration=1.20674ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.398160986Z level=info msg="Executing migration" id="add index user_role.org_id"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.399839388Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.681773ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.402653588Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.404264418Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.61075ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.407693023Z level=info msg="Executing migration" id="add index user_role.user_id"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.408583905Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=891.593µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.410933223Z level=info msg="Executing migration" id="create builtin role table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.411791164Z level=info msg="Migration successfully executed" id="create builtin role table" duration=854.721µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.415136177Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.416275006Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.140439ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.419006724Z level=info msg="Executing migration" id="add index builtin_role.name"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.42004909Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.042787ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.42290041Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.429169486Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=6.265526ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.44339622Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.44502464Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.63352ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.468463793Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.470564604Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=2.107772ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.475885307Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.478087511Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=2.204114ms
Nov 24 09:29:25 compute-0 zealous_tu[99077]: {
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "user_id": "openstack",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "display_name": "openstack",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "email": "",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "suspended": 0,
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "max_buckets": 1000,
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "subusers": [],
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "keys": [
Nov 24 09:29:25 compute-0 zealous_tu[99077]:         {
Nov 24 09:29:25 compute-0 zealous_tu[99077]:             "user": "openstack",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:             "access_key": "8FD9FADANAI8HIFNBO8H",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:             "secret_key": "z23VCgewpZA9t8YVcrBt8fUQ1Uldf2JqyFDORg2c",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:             "active": true,
Nov 24 09:29:25 compute-0 zealous_tu[99077]:             "create_date": "2025-11-24T09:29:25.432334Z"
Nov 24 09:29:25 compute-0 zealous_tu[99077]:         }
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     ],
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "swift_keys": [],
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "caps": [],
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "op_mask": "read, write, delete",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "default_placement": "",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "default_storage_class": "",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "placement_tags": [],
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "bucket_quota": {
Nov 24 09:29:25 compute-0 zealous_tu[99077]:         "enabled": false,
Nov 24 09:29:25 compute-0 zealous_tu[99077]:         "check_on_raw": false,
Nov 24 09:29:25 compute-0 zealous_tu[99077]:         "max_size": -1,
Nov 24 09:29:25 compute-0 zealous_tu[99077]:         "max_size_kb": 0,
Nov 24 09:29:25 compute-0 zealous_tu[99077]:         "max_objects": -1
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     },
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "user_quota": {
Nov 24 09:29:25 compute-0 zealous_tu[99077]:         "enabled": false,
Nov 24 09:29:25 compute-0 zealous_tu[99077]:         "check_on_raw": false,
Nov 24 09:29:25 compute-0 zealous_tu[99077]:         "max_size": -1,
Nov 24 09:29:25 compute-0 zealous_tu[99077]:         "max_size_kb": 0,
Nov 24 09:29:25 compute-0 zealous_tu[99077]:         "max_objects": -1
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     },
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "temp_url_keys": [],
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "type": "rgw",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "mfa_ids": [],
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "account_id": "",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "path": "/",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "create_date": "2025-11-24T09:29:25.431962Z",
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "tags": [],
Nov 24 09:29:25 compute-0 zealous_tu[99077]:     "group_ids": []
Nov 24 09:29:25 compute-0 zealous_tu[99077]: }
Nov 24 09:29:25 compute-0 zealous_tu[99077]: 
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.48087085Z level=info msg="Executing migration" id="add unique index role.uid"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.482646155Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.779585ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.485136896Z level=info msg="Executing migration" id="create seed assignment table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.486323086Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.22246ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.48971913Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.491007843Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.288742ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.494253683Z level=info msg="Executing migration" id="add column hidden to role table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.502228351Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.973088ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.504753993Z level=info msg="Executing migration" id="permission kind migration"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.512847424Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.088871ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.514808604Z level=info msg="Executing migration" id="permission attribute migration"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.522068224Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.25554ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.524757721Z level=info msg="Executing migration" id="permission identifier migration"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.531373904Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=6.612683ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.533343754Z level=info msg="Executing migration" id="add permission identifier index"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.534355459Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.009525ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.537742063Z level=info msg="Executing migration" id="add permission action scope role_id index"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.538728098Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=984.055µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[98094]: ts=2025-11-24T09:29:25.539Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003299835s
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.540950882Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.541914807Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=964.455µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.544294706Z level=info msg="Executing migration" id="create query_history table v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.545087635Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=803.489µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.547182308Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.54809048Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=907.852µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.551919135Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.552009487Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=91.182µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.555087893Z level=info msg="Executing migration" id="rbac disabled migrator"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.555174236Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=87.672µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.557495784Z level=info msg="Executing migration" id="teams permissions migration"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.557963755Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=468.182µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.560331044Z level=info msg="Executing migration" id="dashboard permissions"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.56096796Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=635.976µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.562540849Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.563130923Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=554.014µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.566536759Z level=info msg="Executing migration" id="drop managed folder create actions"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.566773944Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=237.005µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.571828599Z level=info msg="Executing migration" id="alerting notification permissions"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.572596999Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=768.18µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.582283039Z level=info msg="Executing migration" id="create query_history_star table v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.583085139Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=805.64µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.587729415Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.58876972Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.040135ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.593204751Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.599438505Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=6.232834ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.600976653Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.601058375Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=76.222µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.602786678Z level=info msg="Executing migration" id="create correlation table v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.604935262Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.134734ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.613765011Z level=info msg="Executing migration" id="add index correlations.uid"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.615974826Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=2.215375ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.618799096Z level=info msg="Executing migration" id="add index correlations.source_uid"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.620252552Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.454876ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.622775015Z level=info msg="Executing migration" id="add correlation config column"
Nov 24 09:29:25 compute-0 systemd[1]: libpod-e58364d0077bce84ccaab4590eb903dacc89ac7d349565cc5589c510841a4b21.scope: Deactivated successfully.
Nov 24 09:29:25 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 22 completed events
Nov 24 09:29:25 compute-0 podman[99039]: 2025-11-24 09:29:25.626849497 +0000 UTC m=+0.750262317 container died e58364d0077bce84ccaab4590eb903dacc89ac7d349565cc5589c510841a4b21 (image=quay.io/ceph/ceph:v19, name=zealous_tu, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:29:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.634034995Z level=info msg="Migration successfully executed" id="add correlation config column" duration=11.25028ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.646875114Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.648967205Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.098242ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.651359185Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.652581885Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.22312ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.657020495Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Nov 24 09:29:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:25 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 79067636-8812-499c-862e-92d1f3778d11 (Global Recovery Event) in 10 seconds
Nov 24 09:29:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eb6ac281c8458fb62eb499e5814e7cd0ef9b2748cecd150de73141e325ea23c-merged.mount: Deactivated successfully.
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.683862772Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=26.840937ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.686079617Z level=info msg="Executing migration" id="create correlation v2"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.687870932Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.790715ms
Nov 24 09:29:25 compute-0 podman[99039]: 2025-11-24 09:29:25.689720858 +0000 UTC m=+0.813133678 container remove e58364d0077bce84ccaab4590eb903dacc89ac7d349565cc5589c510841a4b21 (image=quay.io/ceph/ceph:v19, name=zealous_tu, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.690425095Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.693152843Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.726198ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.695725867Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.697129522Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.404065ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.699214873Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.701172363Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.957069ms
Nov 24 09:29:25 compute-0 systemd[1]: libpod-conmon-e58364d0077bce84ccaab4590eb903dacc89ac7d349565cc5589c510841a4b21.scope: Deactivated successfully.
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.703457699Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.703765747Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=306.388µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.705576032Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.7066999Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.123538ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.708691339Z level=info msg="Executing migration" id="add provisioning column"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.718665177Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.967357ms
Nov 24 09:29:25 compute-0 sudo[99003]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.720708758Z level=info msg="Executing migration" id="create entity_events table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.721776994Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.069636ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.723742313Z level=info msg="Executing migration" id="create dashboard public config v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.724694147Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=949.814µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.72725609Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.72763855Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.729174808Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.729505866Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.731122626Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.732093311Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=969.584µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.734007478Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.734960282Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=952.094µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.737378742Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.738273404Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=894.582µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.742018327Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.742982971Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=966.674µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.745282208Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.746435967Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.155778ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.748414026Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.749512763Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.099267ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.751289478Z level=info msg="Executing migration" id="Drop public config table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.752079857Z level=info msg="Migration successfully executed" id="Drop public config table" duration=790.789µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.753694567Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.754724342Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.029625ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.756446825Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.757337047Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=890.242µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.758679571Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.759650315Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=971.434µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.761213304Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.762132226Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=918.132µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.764071775Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.785754623Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=21.674668ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.787778594Z level=info msg="Executing migration" id="add annotations_enabled column"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.796205473Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.420869ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.798294085Z level=info msg="Executing migration" id="add time_selection_enabled column"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.806051658Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.752193ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.808326924Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.808650862Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=326.939µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.81059361Z level=info msg="Executing migration" id="add share column"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.817464831Z level=info msg="Migration successfully executed" id="add share column" duration=6.866911ms
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.819258325Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.8194491Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=191.135µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.821168832Z level=info msg="Executing migration" id="create file table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.822016824Z level=info msg="Migration successfully executed" id="create file table" duration=849.022µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.824292091Z level=info msg="Executing migration" id="file table idx: path natural pk"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.825202503Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=909.842µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.827839268Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.828751921Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=913.843µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.830842633Z level=info msg="Executing migration" id="create file_meta table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.831623963Z level=info msg="Migration successfully executed" id="create file_meta table" duration=768.119µs
Nov 24 09:29:25 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.0 deep-scrub starts
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.880811314Z level=info msg="Executing migration" id="file table idx: path key"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.882322402Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.500567ms
Nov 24 09:29:25 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.0 deep-scrub ok
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.929548125Z level=info msg="Executing migration" id="set path collation in file table"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.929717939Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=176.374µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.980584462Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.980783057Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=205.375µs
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.987212307Z level=info msg="Executing migration" id="managed permissions migration"
Nov 24 09:29:25 compute-0 ceph-mon[74331]: pgmap v54: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:25 compute-0 ceph-mon[74331]: 10.16 scrub starts
Nov 24 09:29:25 compute-0 ceph-mon[74331]: 10.16 scrub ok
Nov 24 09:29:25 compute-0 ceph-mon[74331]: 11.15 scrub starts
Nov 24 09:29:25 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 24 09:29:25 compute-0 ceph-mon[74331]: 11.15 scrub ok
Nov 24 09:29:25 compute-0 ceph-mon[74331]: osdmap e62: 3 total, 3 up, 3 in
Nov 24 09:29:25 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:25 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:25 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:25 compute-0 ceph-mon[74331]: 12.1a scrub starts
Nov 24 09:29:25 compute-0 ceph-mon[74331]: 12.1a scrub ok
Nov 24 09:29:25 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:25.98894971Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.738953ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.006289131Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.00746466Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=1.178769ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.019224092Z level=info msg="Executing migration" id="RBAC action name migrator"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.02114448Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.930887ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.024316389Z level=info msg="Executing migration" id="Add UID column to playlist"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.032788279Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.47262ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.038165272Z level=info msg="Executing migration" id="Update uid column values in playlist"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.038568273Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=472.483µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.041813043Z level=info msg="Executing migration" id="Add index for uid in playlist"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.043637538Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.823215ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.047710779Z level=info msg="Executing migration" id="update group index for alert rules"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.048034697Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=323.068µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.052327055Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.052638872Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=314.107µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.055826791Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.056356744Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=529.903µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.058067367Z level=info msg="Executing migration" id="add action column to seed_assignment"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.064573408Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.505791ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.06746909Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.074010623Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.539613ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.076837773Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.077871249Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.035156ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.079720975Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.155045336Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=75.318711ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.157601319Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.158752848Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.149049ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.161021204Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.161933647Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=912.453µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.164194182Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.185464892Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=21.265559ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.188265481Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.195261865Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.992954ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.197161812Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.197475499Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=314.337µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.199167181Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.199417808Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=250.357µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.201012858Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.201223263Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=210.495µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.202659078Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.202862353Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=203.255µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.204631327Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.204854593Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=223.365µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.206591516Z level=info msg="Executing migration" id="create folder table"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.207492108Z level=info msg="Migration successfully executed" id="create folder table" duration=899.962µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.209293213Z level=info msg="Executing migration" id="Add index for parent_uid"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.21039328Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.099857ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.21240027Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.213471437Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.071897ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.215411115Z level=info msg="Executing migration" id="Update folder title length"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.215479897Z level=info msg="Migration successfully executed" id="Update folder title length" duration=69.642µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.217032945Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.218055351Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.022406ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.220556693Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.221714922Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.155499ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.223286451Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.224276555Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=985.244µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.22648665Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.22690121Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=414.64µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.228285134Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.22852423Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=239.406µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.229999877Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.23091523Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=915.363µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.232328485Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.233240098Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=911.433µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.234800717Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.235671829Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=871.172µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.237255018Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.238133529Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=878.181µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.240725894Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.241978775Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.252161ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.243979784Z level=info msg="Executing migration" id="create anon_device table"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.24542218Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.442396ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.247045831Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.248727693Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.681882ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.251067061Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.251937652Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=870.081µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.254395383Z level=info msg="Executing migration" id="create signing_key table"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.255269266Z level=info msg="Migration successfully executed" id="create signing_key table" duration=874.112µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.257287565Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.258177918Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=889.813µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.261255724Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.262155096Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=899.322µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.264678068Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.264932675Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=254.987µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.266377921Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:26 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf00016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.272694088Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.313468ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.274370269Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.274991965Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=622.236µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.277510977Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.27841777Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=906.483µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.280631155Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.281494876Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=863.781µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.283004314Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.283880566Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=875.912µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.285594598Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.286556362Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=961.214µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.288246164Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.289127566Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=881.132µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.290600663Z level=info msg="Executing migration" id="create sso_setting table"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.291431913Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=829.51µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.294717915Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.295394381Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=677.656µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.297019882Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.297284719Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=264.987µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.299457092Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.299539414Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=82.922µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.301079493Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.307627906Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.548363ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.30901313Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.315469521Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.455841ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.316894656Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.317253115Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=357.379µs
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=migrator t=2025-11-24T09:29:26.31908159Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.895762096s
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=sqlstore t=2025-11-24T09:29:26.32029671Z level=info msg="Created default organization"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=secrets t=2025-11-24T09:29:26.322083794Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=plugin.store t=2025-11-24T09:29:26.349799433Z level=info msg="Loading plugins..."
Nov 24 09:29:26 compute-0 python3[99221]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=local.finder t=2025-11-24T09:29:26.422466678Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=plugin.store t=2025-11-24T09:29:26.422495259Z level=info msg="Plugins loaded" count=55 duration=72.695966ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=query_data t=2025-11-24T09:29:26.425044812Z level=info msg="Query Service initialization"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=live.push_http t=2025-11-24T09:29:26.42818051Z level=info msg="Live Push Gateway initialization"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=ngalert.migration t=2025-11-24T09:29:26.430813025Z level=info msg=Starting
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=ngalert.migration t=2025-11-24T09:29:26.431206595Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=ngalert.migration orgID=1 t=2025-11-24T09:29:26.431542603Z level=info msg="Migrating alerts for organisation"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=ngalert.migration orgID=1 t=2025-11-24T09:29:26.432088837Z level=info msg="Alerts found to migrate" alerts=0
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=ngalert.migration t=2025-11-24T09:29:26.433507562Z level=info msg="Completed alerting migration"
Nov 24 09:29:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 09:29:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Nov 24 09:29:26 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=ngalert.state.manager t=2025-11-24T09:29:26.455751965Z level=info msg="Running in alternative execution of Error/NoData mode"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=infra.usagestats.collector t=2025-11-24T09:29:26.458909883Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=provisioning.datasources t=2025-11-24T09:29:26.460322248Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=provisioning.alerting t=2025-11-24T09:29:26.47046525Z level=info msg="starting to provision alerting"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=provisioning.alerting t=2025-11-24T09:29:26.470510821Z level=info msg="finished to provision alerting"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=ngalert.state.manager t=2025-11-24T09:29:26.470660115Z level=info msg="Warming state cache for startup"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=ngalert.state.manager t=2025-11-24T09:29:26.47167668Z level=info msg="State cache has been initialized" states=0 duration=1.016485ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=ngalert.multiorg.alertmanager t=2025-11-24T09:29:26.471735292Z level=info msg="Starting MultiOrg Alertmanager"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=ngalert.scheduler t=2025-11-24T09:29:26.471777303Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=ticker t=2025-11-24T09:29:26.471859244Z level=info msg=starting first_tick=2025-11-24T09:29:30Z
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=grafanaStorageLogger t=2025-11-24T09:29:26.472542312Z level=info msg="Storage starting"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=http.server t=2025-11-24T09:29:26.475122376Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=http.server t=2025-11-24T09:29:26.475434544Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:26 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00003b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=plugins.update.checker t=2025-11-24T09:29:26.558059696Z level=info msg="Update check succeeded" duration=86.397326ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=sqlstore.transactions t=2025-11-24T09:29:26.578677807Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=sqlstore.transactions t=2025-11-24T09:29:26.589738662Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Nov 24 09:29:26 compute-0 ceph-mgr[74626]: [dashboard INFO request] [192.168.122.100:45342] [GET] [200] [0.128s] [6.3K] [7c1f18ac-49ed-488c-baef-9554eaf52fc2] /
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=provisioning.dashboard t=2025-11-24T09:29:26.600148621Z level=info msg="starting to provision dashboards"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=sqlstore.transactions t=2025-11-24T09:29:26.600463909Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=grafana.update.checker t=2025-11-24T09:29:26.652509882Z level=info msg="Update check succeeded" duration=180.885702ms
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=grafana-apiserver t=2025-11-24T09:29:26.805873251Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=grafana-apiserver t=2025-11-24T09:29:26.813990793Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:26 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf40018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:26 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 24 09:29:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=provisioning.dashboard t=2025-11-24T09:29:26.854068088Z level=info msg="finished to provision dashboards"
Nov 24 09:29:26 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 24 09:29:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 24 09:29:26 compute-0 python3[99251]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:29:26 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 24 09:29:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 24 09:29:27 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 24 09:29:27 compute-0 ceph-mon[74331]: Deploying daemon haproxy.rgw.default.compute-2.tariiq on compute-2
Nov 24 09:29:27 compute-0 ceph-mon[74331]: 12.15 scrub starts
Nov 24 09:29:27 compute-0 ceph-mon[74331]: 12.15 scrub ok
Nov 24 09:29:27 compute-0 ceph-mon[74331]: 11.0 deep-scrub starts
Nov 24 09:29:27 compute-0 ceph-mon[74331]: 11.0 deep-scrub ok
Nov 24 09:29:27 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 24 09:29:27 compute-0 ceph-mgr[74626]: [dashboard INFO request] [192.168.122.100:45354] [GET] [200] [0.002s] [6.3K] [ef529cc7-3322-4ed5-8c79-244848683577] /
Nov 24 09:29:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:27.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.17( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=3 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.865754128s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 184.068389893s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.17( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=3 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.865695953s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.068389893s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.b( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.865816116s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 184.069122314s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.b( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.865764618s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.069122314s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.3( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.865091324s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 184.068817139s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.865300179s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 184.069137573s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.865271568s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.069137573s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.3( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.865039825s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.068817139s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.7( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.865277290s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 184.069519043s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.7( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.865244865s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.069519043s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.1b( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.868383408s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 184.072875977s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.1b( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.868352890s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.072875977s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.868268967s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 184.073059082s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.868253708s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.073059082s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.13( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.867792130s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 184.073333740s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 63 pg[9.13( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=10.867706299s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 184.073333740s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:27.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:29:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:29:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 24 09:29:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Nov 24 09:29:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:27 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:27 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:27 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.atxclo on compute-2
Nov 24 09:29:27 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.atxclo on compute-2
Nov 24 09:29:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 24 09:29:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.17( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=3 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.17( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=3 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.3( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.3( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.b( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.b( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.7( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.7( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.1b( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.1b( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.13( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 64 pg[9.13( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:27 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:27.796270) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567796444, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7194, "num_deletes": 254, "total_data_size": 13110166, "memory_usage": 13576800, "flush_reason": "Manual Compaction"}
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567868563, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 11705348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 142, "largest_seqno": 7331, "table_properties": {"data_size": 11679276, "index_size": 16549, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 81416, "raw_average_key_size": 24, "raw_value_size": 11614820, "raw_average_value_size": 3455, "num_data_blocks": 727, "num_entries": 3361, "num_filter_entries": 3361, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976307, "oldest_key_time": 1763976307, "file_creation_time": 1763976567, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 72404 microseconds, and 44745 cpu microseconds.
Nov 24 09:29:27 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:27.868675) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 11705348 bytes OK
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:27.868723) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:27.870494) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:27.870534) EVENT_LOG_v1 {"time_micros": 1763976567870523, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:27.870583) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13077782, prev total WAL file size 13077782, number of live WAL files 2.
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:27.875914) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(57KB) 8(1944B)]
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567876085, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 11765786, "oldest_snapshot_seqno": -1}
Nov 24 09:29:27 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3180 keys, 11747790 bytes, temperature: kUnknown
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567938777, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 11747790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11722052, "index_size": 16668, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 80304, "raw_average_key_size": 25, "raw_value_size": 11659165, "raw_average_value_size": 3666, "num_data_blocks": 732, "num_entries": 3180, "num_filter_entries": 3180, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763976567, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:27.939314) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 11747790 bytes
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:27.940920) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.3 rd, 187.0 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.2, 0.0 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3470, records dropped: 290 output_compression: NoCompression
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:27.940967) EVENT_LOG_v1 {"time_micros": 1763976567940945, "job": 4, "event": "compaction_finished", "compaction_time_micros": 62828, "compaction_time_cpu_micros": 36733, "output_level": 6, "num_output_files": 1, "total_output_size": 11747790, "num_input_records": 3470, "num_output_records": 3180, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567945560, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567945782, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976567945887, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 24 09:29:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:27.875704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
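[annotation] The ceph-mon rocksdb lines above (flush_started, table_file_creation, flush_finished, compaction_started, compaction_finished, table_file_deletion) each carry a machine-readable JSON payload after the EVENT_LOG_v1 marker. A minimal extraction sketch for a capture like this one; the regex, file name, and the aggregation at the end are illustrative assumptions, not anything the daemons provide:

    import json
    import re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    def rocksdb_events(lines):
        """Yield the JSON payload of every rocksdb EVENT_LOG_v1 line."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    # Example: bytes written by table-file creations in this window
    # (job 3 flush: 11705348 bytes; job 4 compaction: 11747790 bytes).
    with open("mon.log") as f:  # hypothetical file holding the lines above
        created = [e for e in rocksdb_events(f)
                   if e.get("event") == "table_file_creation"]
        print(sum(e["file_size"] for e in created))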
Nov 24 09:29:28 compute-0 ceph-mon[74331]: pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 09:29:28 compute-0 ceph-mon[74331]: 10.0 deep-scrub starts
Nov 24 09:29:28 compute-0 ceph-mon[74331]: 10.0 deep-scrub ok
Nov 24 09:29:28 compute-0 ceph-mon[74331]: 11.c scrub starts
Nov 24 09:29:28 compute-0 ceph-mon[74331]: 11.c scrub ok
Nov 24 09:29:28 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 24 09:29:28 compute-0 ceph-mon[74331]: osdmap e63: 3 total, 3 up, 3 in
Nov 24 09:29:28 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:28 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:28 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:28 compute-0 ceph-mon[74331]: 10.e deep-scrub starts
Nov 24 09:29:28 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:28 compute-0 ceph-mon[74331]: 10.e deep-scrub ok
Nov 24 09:29:28 compute-0 ceph-mon[74331]: osdmap e64: 3 total, 3 up, 3 in
Nov 24 09:29:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:28 : epoch 6924254d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:29:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:28 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s, 3 objects/s recovering
Nov 24 09:29:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Nov 24 09:29:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 24 09:29:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:28 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf00016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 24 09:29:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 24 09:29:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 24 09:29:28 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 24 09:29:28 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 65 pg[9.17( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=3 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:28 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 65 pg[9.7( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:28 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 65 pg[9.1b( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:28 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 65 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:28 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 65 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:28 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 65 pg[9.3( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:28 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 65 pg[9.b( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:28 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 65 pg[9.13( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[54,64)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:28 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00003b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:29:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:29.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
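[annotation] The recurring anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and 192.168.122.102 every couple of seconds are consistent with health checks for the ingress.rgw.default service being deployed in this window. The radosgw "beast:" lines use a fixed combined-log-like layout; a best-effort parser, with the field layout inferred purely from the lines in this capture rather than from radosgw documentation:

    import re

    # <ptr>: <client> - <user> [<ts>] "<request>" <status> <bytes> - - - latency=<sec>s
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line):
        """Return the access-log fields of a beast line, or None."""
        m = BEAST_RE.search(line)
        return m.groupdict() if m else None

    line = ('beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous '
            '[24/Nov/2025:09:29:29.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    print(parse_beast(line)["latency"])  # -> 0.001000025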
Nov 24 09:29:29 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:29 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:29 compute-0 ceph-mon[74331]: Deploying daemon keepalived.rgw.default.compute-2.atxclo on compute-2
Nov 24 09:29:29 compute-0 ceph-mon[74331]: 11.b scrub starts
Nov 24 09:29:29 compute-0 ceph-mon[74331]: 11.b scrub ok
Nov 24 09:29:29 compute-0 ceph-mon[74331]: pgmap v59: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s, 3 objects/s recovering
Nov 24 09:29:29 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 24 09:29:29 compute-0 ceph-mon[74331]: 10.a scrub starts
Nov 24 09:29:29 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 24 09:29:29 compute-0 ceph-mon[74331]: osdmap e65: 3 total, 3 up, 3 in
Nov 24 09:29:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:29:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:29.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:29:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:29:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:29:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 24 09:29:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:29 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:29 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:29 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:29 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:29 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.zrpppr on compute-0
Nov 24 09:29:29 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.zrpppr on compute-0
Nov 24 09:29:29 compute-0 sudo[99255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:29 compute-0 sudo[99255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:29 compute-0 sudo[99255]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:29 compute-0 sudo[99280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:29 compute-0 sudo[99280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 24 09:29:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.17( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=3 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.988794327s) [2] async=[2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 190.741958618s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.17( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=3 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.988743782s) [2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.741958618s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.3( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=6 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.993798256s) [2] async=[2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 190.747131348s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.3( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=6 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.993733406s) [2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.747131348s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.b( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=6 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.993295670s) [2] async=[2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 190.747131348s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.b( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=6 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.993264198s) [2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.747131348s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=6 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.992499352s) [2] async=[2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 190.746994019s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.f( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=6 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.992440224s) [2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.746994019s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.7( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=6 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.991877556s) [2] async=[2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 190.746719360s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=5 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.991895676s) [2] async=[2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 190.746871948s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.7( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=6 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.991744995s) [2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.746719360s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=5 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.991852760s) [2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.746871948s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.1b( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=5 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.991701126s) [2] async=[2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 190.746749878s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.1b( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=5 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.991587639s) [2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.746749878s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.13( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=5 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.992065430s) [2] async=[2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 190.747467041s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:29 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 66 pg[9.13( v 45'1130 (0'0,45'1130] local-lis/les=64/65 n=5 ec=54/39 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.992032051s) [2] r=-1 lpr=66 pi=[54,66)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.747467041s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:29 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
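[annotation] Each osdmap epoch bump above (e63, e64, e66) makes osd.0 restart the peering interval for the pool-9 PGs and flip role between Primary and Stray as the acting set moves between [0] and [2]; this churn is expected while pgp_num_actual is being stepped. A rough summary helper, assuming exactly the line shapes seen here:

    import re
    from collections import Counter

    PEER_RE = re.compile(r"pg_epoch: (?P<epoch>\d+) pg\[(?P<pgid>\d+\.[0-9a-f]+)\(")
    STATE_RE = re.compile(r"state<Start>: transitioning to (?P<state>\w+)")

    def peering_summary(lines):
        """Count, per osdmap epoch, how many PGs ended up Primary vs Stray."""
        counts = Counter()
        for line in lines:
            m, s = PEER_RE.search(line), STATE_RE.search(line)
            if m and s:
                counts[(int(m.group("epoch")), s.group("state"))] += 1
        return counts

    # On the osd.0 lines above this yields, e.g.,
    # {(63, 'Stray'): 8, (64, 'Primary'): 8, (66, 'Stray'): 8}.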
Nov 24 09:29:30 compute-0 podman[99344]: 2025-11-24 09:29:30.044398341 +0000 UTC m=+0.037033341 container create 2d670445edd3e72172bde46e64233482279744d37073273533c64be681e3c782 (image=quay.io/ceph/keepalived:2.2.4, name=interesting_gould, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., name=keepalived, architecture=x86_64, vcs-type=git, release=1793, com.redhat.component=keepalived-container, io.openshift.expose-services=, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20)
Nov 24 09:29:30 compute-0 systemd[1]: Started libpod-conmon-2d670445edd3e72172bde46e64233482279744d37073273533c64be681e3c782.scope.
Nov 24 09:29:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:30 compute-0 podman[99344]: 2025-11-24 09:29:30.118135172 +0000 UTC m=+0.110770182 container init 2d670445edd3e72172bde46e64233482279744d37073273533c64be681e3c782 (image=quay.io/ceph/keepalived:2.2.4, name=interesting_gould, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, release=1793)
Nov 24 09:29:30 compute-0 podman[99344]: 2025-11-24 09:29:30.028754743 +0000 UTC m=+0.021389753 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 24 09:29:30 compute-0 podman[99344]: 2025-11-24 09:29:30.126780527 +0000 UTC m=+0.119415547 container start 2d670445edd3e72172bde46e64233482279744d37073273533c64be681e3c782 (image=quay.io/ceph/keepalived:2.2.4, name=interesting_gould, vcs-type=git, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, release=1793, io.openshift.tags=Ceph keepalived, distribution-scope=public, version=2.2.4, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 24 09:29:30 compute-0 podman[99344]: 2025-11-24 09:29:30.130319235 +0000 UTC m=+0.122954235 container attach 2d670445edd3e72172bde46e64233482279744d37073273533c64be681e3c782 (image=quay.io/ceph/keepalived:2.2.4, name=interesting_gould, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, name=keepalived, io.openshift.expose-services=, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., release=1793)
Nov 24 09:29:30 compute-0 interesting_gould[99361]: 0 0
Nov 24 09:29:30 compute-0 systemd[1]: libpod-2d670445edd3e72172bde46e64233482279744d37073273533c64be681e3c782.scope: Deactivated successfully.
Nov 24 09:29:30 compute-0 conmon[99361]: conmon 2d670445edd3e72172bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d670445edd3e72172bde46e64233482279744d37073273533c64be681e3c782.scope/container/memory.events
Nov 24 09:29:30 compute-0 podman[99344]: 2025-11-24 09:29:30.134315174 +0000 UTC m=+0.126950164 container died 2d670445edd3e72172bde46e64233482279744d37073273533c64be681e3c782 (image=quay.io/ceph/keepalived:2.2.4, name=interesting_gould, description=keepalived for Ceph, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., name=keepalived, release=1793, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, architecture=x86_64, vcs-type=git, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 24 09:29:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8687abd90c34e031d09665f9d2d8c819b518d9c92b18b30d5db1dafeeb2af499-merged.mount: Deactivated successfully.
Nov 24 09:29:30 compute-0 podman[99344]: 2025-11-24 09:29:30.167453028 +0000 UTC m=+0.160088028 container remove 2d670445edd3e72172bde46e64233482279744d37073273533c64be681e3c782 (image=quay.io/ceph/keepalived:2.2.4, name=interesting_gould, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-type=git, distribution-scope=public, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, name=keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64)
Nov 24 09:29:30 compute-0 systemd[1]: libpod-conmon-2d670445edd3e72172bde46e64233482279744d37073273533c64be681e3c782.scope: Deactivated successfully.
Nov 24 09:29:30 compute-0 ceph-mon[74331]: 10.a scrub ok
Nov 24 09:29:30 compute-0 ceph-mon[74331]: 8.c scrub starts
Nov 24 09:29:30 compute-0 ceph-mon[74331]: 8.c scrub ok
Nov 24 09:29:30 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:30 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:30 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:30 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 24 09:29:30 compute-0 ceph-mon[74331]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 24 09:29:30 compute-0 ceph-mon[74331]: Deploying daemon keepalived.rgw.default.compute-0.zrpppr on compute-0
Nov 24 09:29:30 compute-0 ceph-mon[74331]: 10.c deep-scrub starts
Nov 24 09:29:30 compute-0 ceph-mon[74331]: 10.c deep-scrub ok
Nov 24 09:29:30 compute-0 ceph-mon[74331]: osdmap e66: 3 total, 3 up, 3 in
Nov 24 09:29:30 compute-0 systemd[1]: Reloading.
Nov 24 09:29:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:30 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf40018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:30 compute-0 systemd-sysv-generator[99412]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:30 compute-0 systemd-rc-local-generator[99408]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s; 294 B/s, 2 objects/s recovering
Nov 24 09:29:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Nov 24 09:29:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 24 09:29:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:30 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:30 compute-0 systemd[1]: Reloading.
Nov 24 09:29:30 compute-0 systemd-sysv-generator[99450]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:30 compute-0 systemd-rc-local-generator[99447]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:30 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 23 completed events
Nov 24 09:29:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:29:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 24 09:29:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 24 09:29:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 24 09:29:30 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
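[annotation] Across this window the mgr walks pgp_num_actual for default.rgw.log up one step at a time (4, then 5, then 6 in the audit lines), each step landing in a new osdmap epoch and triggering the re-peering bursts on osd.0; moving placement in small increments is how the mgr splits PGs gradually. The dispatched mon_command corresponds to the ordinary CLI form; a purely illustrative invocation (in this cluster the mgr issues it automatically, and the subprocess wrapper exists only for the sketch):

    import subprocess

    # Same operation the mgr dispatches above, expressed as the CLI call.
    # Pool name and value are taken from the audit lines in this capture.
    subprocess.run(
        ["ceph", "osd", "pool", "set", "default.rgw.log", "pgp_num_actual", "6"],
        check=True,
    )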
Nov 24 09:29:30 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.zrpppr for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:29:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:30 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf00016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:30 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 67 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.195795059s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 192.068862915s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:30 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 67 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.195730209s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 192.069458008s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:30 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 67 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.195695877s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.069458008s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:30 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 67 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.194797516s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.068862915s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:30 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 67 pg[9.5( v 56'1133 (0'0,56'1133] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.198564529s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=55'1131 lcod 55'1132 mlcod 55'1132 active pruub 192.072875977s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:30 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 67 pg[9.5( v 56'1133 (0'0,56'1133] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.198502541s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=55'1131 lcod 55'1132 mlcod 0'0 unknown NOTIFY pruub 192.072875977s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:30 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 67 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.198533058s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 192.073364258s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:30 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 67 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.198472977s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.073364258s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:31 compute-0 podman[99505]: 2025-11-24 09:29:31.051001673 +0000 UTC m=+0.037279517 container create d5395347222ac278fb4f146fc9d90a8c85b2a981ca274101c0135407cbe033b2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph)
Nov 24 09:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcfffd54835110a6643e4f6f31d6f2eb14263fea934a0dc5ef944129c62c613f/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:31 compute-0 podman[99505]: 2025-11-24 09:29:31.122697154 +0000 UTC m=+0.108975028 container init d5395347222ac278fb4f146fc9d90a8c85b2a981ca274101c0135407cbe033b2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr, vcs-type=git, io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 24 09:29:31 compute-0 podman[99505]: 2025-11-24 09:29:31.127431841 +0000 UTC m=+0.113709685 container start d5395347222ac278fb4f146fc9d90a8c85b2a981ca274101c0135407cbe033b2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr, release=1793, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, description=keepalived for Ceph, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, distribution-scope=public)
Nov 24 09:29:31 compute-0 podman[99505]: 2025-11-24 09:29:31.034149875 +0000 UTC m=+0.020427739 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 24 09:29:31 compute-0 bash[99505]: d5395347222ac278fb4f146fc9d90a8c85b2a981ca274101c0135407cbe033b2
Nov 24 09:29:31 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.zrpppr for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:29:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:31 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 24 09:29:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:31 2025: Running on Linux 5.14.0-639.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025 (built for Linux 5.14.0)
Nov 24 09:29:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:31 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 24 09:29:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:31 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 24 09:29:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:31 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Nov 24 09:29:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:31 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 24 09:29:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:31 2025: Starting VRRP child process, pid=4
Nov 24 09:29:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:31 2025: Startup complete
Nov 24 09:29:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:31 2025: (VI_0) Entering BACKUP STATE
Nov 24 09:29:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:31 2025: (VI_0) Entering BACKUP STATE (init)
Nov 24 09:29:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:31.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:31 2025: VRRP_Script(check_backend) succeeded
Nov 24 09:29:31 compute-0 sudo[99280]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 24 09:29:31 compute-0 ceph-mon[74331]: 11.17 scrub starts
Nov 24 09:29:31 compute-0 ceph-mon[74331]: 11.17 scrub ok
Nov 24 09:29:31 compute-0 ceph-mon[74331]: pgmap v62: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s; 294 B/s, 2 objects/s recovering
Nov 24 09:29:31 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 24 09:29:31 compute-0 ceph-mon[74331]: 10.9 scrub starts
Nov 24 09:29:31 compute-0 ceph-mon[74331]: 10.9 scrub ok
Nov 24 09:29:31 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:31 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 24 09:29:31 compute-0 ceph-mon[74331]: osdmap e67: 3 total, 3 up, 3 in
Nov 24 09:29:31 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:31 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 59eac4a3-cc7a-4062-988e-7cc8bd0e133a (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 24 09:29:31 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 59eac4a3-cc7a-4062-988e-7cc8bd0e133a (Updating ingress.rgw.default deployment (+4 -> 4)) in 8 seconds
Nov 24 09:29:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 24 09:29:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:31 compute-0 ceph-mgr[74626]: [progress INFO root] update: starting ev 2e1526f6-f9c3-41c7-b285-0fc61db11077 (Updating prometheus deployment (+1 -> 1))
Nov 24 09:29:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:31.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:31 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Nov 24 09:29:31 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Nov 24 09:29:31 compute-0 sudo[99528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:31 compute-0 sudo[99528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:31 compute-0 sudo[99528]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:31 compute-0 sudo[99553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:31 compute-0 sudo[99553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:31 2025: (VI_0) Entering MASTER STATE
Nov 24 09:29:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 24 09:29:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 24 09:29:31 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 24 09:29:31 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.16 deep-scrub starts
Nov 24 09:29:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 68 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[0] r=0 lpr=68 pi=[54,68)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 68 pg[9.5( v 56'1133 (0'0,56'1133] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[0] r=0 lpr=68 pi=[54,68)/1 crt=55'1131 lcod 55'1132 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 68 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[0] r=0 lpr=68 pi=[54,68)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 68 pg[9.5( v 56'1133 (0'0,56'1133] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[0] r=0 lpr=68 pi=[54,68)/1 crt=55'1131 lcod 55'1132 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 68 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[0] r=0 lpr=68 pi=[54,68)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 68 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[0] r=0 lpr=68 pi=[54,68)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 68 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[0] r=0 lpr=68 pi=[54,68)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:31 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 68 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[0] r=0 lpr=68 pi=[54,68)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:31 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.16 deep-scrub ok
Nov 24 09:29:32 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:32 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:32 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:32 compute-0 ceph-mon[74331]: 12.17 scrub starts
Nov 24 09:29:32 compute-0 ceph-mon[74331]: 12.17 scrub ok
Nov 24 09:29:32 compute-0 ceph-mon[74331]: Deploying daemon prometheus.compute-0 on compute-0
Nov 24 09:29:32 compute-0 ceph-mon[74331]: 12.f scrub starts
Nov 24 09:29:32 compute-0 ceph-mon[74331]: osdmap e68: 3 total, 3 up, 3 in
Nov 24 09:29:32 compute-0 ceph-mon[74331]: 9.16 deep-scrub starts
Nov 24 09:29:32 compute-0 ceph-mon[74331]: 9.16 deep-scrub ok
Nov 24 09:29:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:32 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00003b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 4 unknown, 349 active+clean; 455 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 302 B/s, 9 objects/s recovering
Nov 24 09:29:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:32 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf40018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 24 09:29:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 24 09:29:32 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 24 09:29:32 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 24 09:29:32 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 69 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=68/69 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:32 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:32 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 69 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=68/69 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:32 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 69 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=68/69 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:32 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 69 pg[9.5( v 56'1133 (0'0,56'1133] local-lis/les=68/69 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[54,68)/1 crt=56'1133 lcod 55'1132 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:32 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 24 09:29:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:33 2025: (VI_0) received an invalid passwd!
Nov 24 09:29:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:33 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Nov 24 09:29:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:29:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:33.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:29:33 compute-0 ceph-mon[74331]: 12.f scrub ok
Nov 24 09:29:33 compute-0 ceph-mon[74331]: 12.9 deep-scrub starts
Nov 24 09:29:33 compute-0 ceph-mon[74331]: 12.9 deep-scrub ok
Nov 24 09:29:33 compute-0 ceph-mon[74331]: pgmap v65: 353 pgs: 4 unknown, 349 active+clean; 455 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 302 B/s, 9 objects/s recovering
Nov 24 09:29:33 compute-0 ceph-mon[74331]: 10.d scrub starts
Nov 24 09:29:33 compute-0 ceph-mon[74331]: 10.d scrub ok
Nov 24 09:29:33 compute-0 ceph-mon[74331]: 9.2 scrub starts
Nov 24 09:29:33 compute-0 ceph-mon[74331]: osdmap e69: 3 total, 3 up, 3 in
Nov 24 09:29:33 compute-0 ceph-mon[74331]: 9.2 scrub ok
Nov 24 09:29:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:33.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 24 09:29:33 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 24 09:29:33 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 24 09:29:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 24 09:29:33 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 24 09:29:34 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 70 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=68/69 n=6 ec=54/39 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=14.840811729s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 194.798736572s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:34 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 70 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=68/69 n=5 ec=54/39 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=14.840845108s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 194.798782349s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:34 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 70 pg[9.5( v 56'1133 (0'0,56'1133] local-lis/les=68/69 n=6 ec=54/39 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=14.840824127s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=56'1133 lcod 55'1132 mlcod 55'1132 active pruub 194.798782349s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:34 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 70 pg[9.1d( v 45'1130 (0'0,45'1130] local-lis/les=68/69 n=5 ec=54/39 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=14.840768814s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.798782349s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:34 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 70 pg[9.d( v 45'1130 (0'0,45'1130] local-lis/les=68/69 n=6 ec=54/39 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=14.840702057s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.798736572s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:34 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 70 pg[9.5( v 56'1133 (0'0,56'1133] local-lis/les=68/69 n=6 ec=54/39 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=14.840729713s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=56'1133 lcod 55'1132 mlcod 0'0 unknown NOTIFY pruub 194.798782349s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:34 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 70 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=68/69 n=4 ec=54/39 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=14.833396912s) [2] async=[2] r=-1 lpr=70 pi=[54,70)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 194.791992188s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:34 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 70 pg[9.15( v 45'1130 (0'0,45'1130] local-lis/les=68/69 n=4 ec=54/39 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=14.833291054s) [2] r=-1 lpr=70 pi=[54,70)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.791992188s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr[97666]: Mon Nov 24 09:29:34 2025: (VI_0) received an invalid passwd!
Nov 24 09:29:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:34 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Nov 24 09:29:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:34 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:34 compute-0 ceph-mon[74331]: 10.11 scrub starts
Nov 24 09:29:34 compute-0 ceph-mon[74331]: 10.11 scrub ok
Nov 24 09:29:34 compute-0 ceph-mon[74331]: 10.b scrub starts
Nov 24 09:29:34 compute-0 ceph-mon[74331]: 10.b scrub ok
Nov 24 09:29:34 compute-0 ceph-mon[74331]: 11.9 scrub starts
Nov 24 09:29:34 compute-0 ceph-mon[74331]: 11.9 scrub ok
Nov 24 09:29:34 compute-0 ceph-mon[74331]: osdmap e70: 3 total, 3 up, 3 in
Nov 24 09:29:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 4 unknown, 349 active+clean; 455 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 302 B/s, 9 objects/s recovering
Nov 24 09:29:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:34 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00003b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-rgw-default-compute-0-zrpppr[99520]: Mon Nov 24 09:29:34 2025: (VI_0) Entering MASTER STATE
Nov 24 09:29:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:34 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4002d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:34 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 24 09:29:34 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 24 09:29:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 24 09:29:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 24 09:29:34 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 24 09:29:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/092934 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:29:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:29:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:35.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:29:35 compute-0 ceph-mon[74331]: 11.e scrub starts
Nov 24 09:29:35 compute-0 ceph-mon[74331]: 11.e scrub ok
Nov 24 09:29:35 compute-0 ceph-mon[74331]: pgmap v68: 353 pgs: 4 unknown, 349 active+clean; 455 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 302 B/s, 9 objects/s recovering
Nov 24 09:29:35 compute-0 ceph-mon[74331]: 12.d scrub starts
Nov 24 09:29:35 compute-0 ceph-mon[74331]: 12.d scrub ok
Nov 24 09:29:35 compute-0 ceph-mon[74331]: 11.d scrub starts
Nov 24 09:29:35 compute-0 ceph-mon[74331]: 11.d scrub ok
Nov 24 09:29:35 compute-0 ceph-mon[74331]: osdmap e71: 3 total, 3 up, 3 in
Nov 24 09:29:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:35.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:35 compute-0 podman[99618]: 2025-11-24 09:29:35.646832825 +0000 UTC m=+3.624630820 volume create 9716d3343b7ba578991de9582929b17fac843d7621ada16407a02f53efff06cd
Nov 24 09:29:35 compute-0 podman[99618]: 2025-11-24 09:29:35.627906836 +0000 UTC m=+3.605704871 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Nov 24 09:29:35 compute-0 podman[99618]: 2025-11-24 09:29:35.656327321 +0000 UTC m=+3.634125316 container create 2db1b79a38e6d1b3f190215e04e4ba7dbbb219f8a32885fb05352d6873300ca8 (image=quay.io/prometheus/prometheus:v2.51.0, name=sad_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:35 compute-0 ceph-mgr[74626]: [progress INFO root] Writing back 24 completed events
Nov 24 09:29:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 24 09:29:35 compute-0 systemd[1]: Started libpod-conmon-2db1b79a38e6d1b3f190215e04e4ba7dbbb219f8a32885fb05352d6873300ca8.scope.
Nov 24 09:29:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:35 compute-0 ceph-mgr[74626]: [progress WARNING root] Starting Global Recovery Event,4 pgs not in active + clean state
Nov 24 09:29:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa95e4132c75d1c9aff1ae5459ac0e9570f0cb4bca666ee525aeae478825fda3/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:35 compute-0 podman[99618]: 2025-11-24 09:29:35.738839141 +0000 UTC m=+3.716637136 container init 2db1b79a38e6d1b3f190215e04e4ba7dbbb219f8a32885fb05352d6873300ca8 (image=quay.io/prometheus/prometheus:v2.51.0, name=sad_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:35 compute-0 podman[99618]: 2025-11-24 09:29:35.747538937 +0000 UTC m=+3.725336932 container start 2db1b79a38e6d1b3f190215e04e4ba7dbbb219f8a32885fb05352d6873300ca8 (image=quay.io/prometheus/prometheus:v2.51.0, name=sad_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:35 compute-0 podman[99618]: 2025-11-24 09:29:35.750834738 +0000 UTC m=+3.728632753 container attach 2db1b79a38e6d1b3f190215e04e4ba7dbbb219f8a32885fb05352d6873300ca8 (image=quay.io/prometheus/prometheus:v2.51.0, name=sad_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:35 compute-0 sad_williams[99877]: 65534 65534
Nov 24 09:29:35 compute-0 systemd[1]: libpod-2db1b79a38e6d1b3f190215e04e4ba7dbbb219f8a32885fb05352d6873300ca8.scope: Deactivated successfully.
Nov 24 09:29:35 compute-0 podman[99618]: 2025-11-24 09:29:35.752553241 +0000 UTC m=+3.730351236 container died 2db1b79a38e6d1b3f190215e04e4ba7dbbb219f8a32885fb05352d6873300ca8 (image=quay.io/prometheus/prometheus:v2.51.0, name=sad_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa95e4132c75d1c9aff1ae5459ac0e9570f0cb4bca666ee525aeae478825fda3-merged.mount: Deactivated successfully.
Nov 24 09:29:35 compute-0 podman[99618]: 2025-11-24 09:29:35.785824817 +0000 UTC m=+3.763622812 container remove 2db1b79a38e6d1b3f190215e04e4ba7dbbb219f8a32885fb05352d6873300ca8 (image=quay.io/prometheus/prometheus:v2.51.0, name=sad_williams, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:35 compute-0 podman[99618]: 2025-11-24 09:29:35.78995801 +0000 UTC m=+3.767756005 volume remove 9716d3343b7ba578991de9582929b17fac843d7621ada16407a02f53efff06cd
Nov 24 09:29:35 compute-0 systemd[1]: libpod-conmon-2db1b79a38e6d1b3f190215e04e4ba7dbbb219f8a32885fb05352d6873300ca8.scope: Deactivated successfully.
Nov 24 09:29:35 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.e deep-scrub starts
Nov 24 09:29:35 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.e deep-scrub ok
Nov 24 09:29:35 compute-0 podman[99894]: 2025-11-24 09:29:35.838766512 +0000 UTC m=+0.023062343 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Nov 24 09:29:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:36 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:36 compute-0 ceph-mon[74331]: 10.1 scrub starts
Nov 24 09:29:36 compute-0 ceph-mon[74331]: 10.1 scrub ok
Nov 24 09:29:36 compute-0 ceph-mon[74331]: 12.5 scrub starts
Nov 24 09:29:36 compute-0 ceph-mon[74331]: 12.5 scrub ok
Nov 24 09:29:36 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:36 compute-0 ceph-mon[74331]: 8.e deep-scrub starts
Nov 24 09:29:36 compute-0 ceph-mon[74331]: 8.e deep-scrub ok
Nov 24 09:29:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 4 unknown, 349 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:36 compute-0 podman[99894]: 2025-11-24 09:29:36.457249845 +0000 UTC m=+0.641545666 volume create 94def31ab826ff9de00203c0847f7a936ace0bab02fa3d1fc7d51400d501a15b
Nov 24 09:29:36 compute-0 podman[99894]: 2025-11-24 09:29:36.468121255 +0000 UTC m=+0.652417076 container create 5f04e7aba5c8a4bf78bb3a485dda0a8f411601b507f62c818e6e3c792ebd3ab6 (image=quay.io/prometheus/prometheus:v2.51.0, name=kind_poincare, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:36 compute-0 systemd[1]: Started libpod-conmon-5f04e7aba5c8a4bf78bb3a485dda0a8f411601b507f62c818e6e3c792ebd3ab6.scope.
Nov 24 09:29:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:36 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be6c837cbcac51a2c994d1386f7eaa2433d89feb2c20f4186add986e22ba700/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:36 compute-0 podman[99894]: 2025-11-24 09:29:36.539966559 +0000 UTC m=+0.724262400 container init 5f04e7aba5c8a4bf78bb3a485dda0a8f411601b507f62c818e6e3c792ebd3ab6 (image=quay.io/prometheus/prometheus:v2.51.0, name=kind_poincare, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:36 compute-0 podman[99894]: 2025-11-24 09:29:36.545815245 +0000 UTC m=+0.730111056 container start 5f04e7aba5c8a4bf78bb3a485dda0a8f411601b507f62c818e6e3c792ebd3ab6 (image=quay.io/prometheus/prometheus:v2.51.0, name=kind_poincare, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:36 compute-0 kind_poincare[99911]: 65534 65534
Nov 24 09:29:36 compute-0 systemd[1]: libpod-5f04e7aba5c8a4bf78bb3a485dda0a8f411601b507f62c818e6e3c792ebd3ab6.scope: Deactivated successfully.
Nov 24 09:29:36 compute-0 podman[99894]: 2025-11-24 09:29:36.548890001 +0000 UTC m=+0.733185812 container attach 5f04e7aba5c8a4bf78bb3a485dda0a8f411601b507f62c818e6e3c792ebd3ab6 (image=quay.io/prometheus/prometheus:v2.51.0, name=kind_poincare, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:36 compute-0 podman[99894]: 2025-11-24 09:29:36.549378654 +0000 UTC m=+0.733674485 container died 5f04e7aba5c8a4bf78bb3a485dda0a8f411601b507f62c818e6e3c792ebd3ab6 (image=quay.io/prometheus/prometheus:v2.51.0, name=kind_poincare, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3be6c837cbcac51a2c994d1386f7eaa2433d89feb2c20f4186add986e22ba700-merged.mount: Deactivated successfully.
Nov 24 09:29:36 compute-0 podman[99894]: 2025-11-24 09:29:36.585457609 +0000 UTC m=+0.769753420 container remove 5f04e7aba5c8a4bf78bb3a485dda0a8f411601b507f62c818e6e3c792ebd3ab6 (image=quay.io/prometheus/prometheus:v2.51.0, name=kind_poincare, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:36 compute-0 podman[99894]: 2025-11-24 09:29:36.590027093 +0000 UTC m=+0.774322924 volume remove 94def31ab826ff9de00203c0847f7a936ace0bab02fa3d1fc7d51400d501a15b
Nov 24 09:29:36 compute-0 systemd[1]: libpod-conmon-5f04e7aba5c8a4bf78bb3a485dda0a8f411601b507f62c818e6e3c792ebd3ab6.scope: Deactivated successfully.
Nov 24 09:29:36 compute-0 systemd[1]: Reloading.
Nov 24 09:29:36 compute-0 systemd-sysv-generator[99957]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:36 compute-0 systemd-rc-local-generator[99953]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:36 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00003b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:36 compute-0 systemd[1]: Reloading.
Nov 24 09:29:36 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.2 deep-scrub starts
Nov 24 09:29:36 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.2 deep-scrub ok
Nov 24 09:29:36 compute-0 systemd-rc-local-generator[99989]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:29:36 compute-0 systemd-sysv-generator[99997]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:29:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:29:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:37.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:29:37 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:29:37 compute-0 ceph-mon[74331]: 10.f scrub starts
Nov 24 09:29:37 compute-0 ceph-mon[74331]: 10.f scrub ok
Nov 24 09:29:37 compute-0 ceph-mon[74331]: pgmap v70: 353 pgs: 4 unknown, 349 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:37 compute-0 ceph-mon[74331]: 12.0 scrub starts
Nov 24 09:29:37 compute-0 ceph-mon[74331]: 12.0 scrub ok
Nov 24 09:29:37 compute-0 ceph-mon[74331]: 11.2 deep-scrub starts
Nov 24 09:29:37 compute-0 ceph-mon[74331]: 11.2 deep-scrub ok
Nov 24 09:29:37 compute-0 podman[100051]: 2025-11-24 09:29:37.424091149 +0000 UTC m=+0.052831882 container create 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:29:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:37.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e808d421ce16ad17b94fab6ba2dae9b588b8fff9c297f105ca3d5f16e0791607/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e808d421ce16ad17b94fab6ba2dae9b588b8fff9c297f105ca3d5f16e0791607/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:37 compute-0 podman[100051]: 2025-11-24 09:29:37.472291197 +0000 UTC m=+0.101031930 container init 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:37 compute-0 podman[100051]: 2025-11-24 09:29:37.476660495 +0000 UTC m=+0.105401248 container start 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:37 compute-0 bash[100051]: 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb
Nov 24 09:29:37 compute-0 podman[100051]: 2025-11-24 09:29:37.396529095 +0000 UTC m=+0.025269828 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Nov 24 09:29:37 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.512Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.512Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.512Z caller=main.go:623 level=info host_details="(Linux 5.14.0-639.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025 x86_64 compute-0 (none))"
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.512Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.512Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.515Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.516Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.518Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.518Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.521Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.521Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.18µs
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.521Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.522Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.522Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=42.881µs wal_replay_duration=366.2µs wbl_replay_duration=170ns total_replay_duration=439.411µs
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.524Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.524Z caller=main.go:1153 level=info msg="TSDB started"
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.524Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Nov 24 09:29:37 compute-0 sudo[99553]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.551Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=27.103453ms db_storage=1.05µs remote_storage=1.74µs web_handler=470ns query_engine=830ns scrape=3.105708ms scrape_sd=256.096µs notify=28.561µs notify_sd=16.13µs rules=23.097934ms tracing=9.24µs
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.551Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Nov 24 09:29:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0[100066]: ts=2025-11-24T09:29:37.551Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Nov 24 09:29:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 24 09:29:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:37 compute-0 ceph-mgr[74626]: [progress INFO root] complete: finished ev 2e1526f6-f9c3-41c7-b285-0fc61db11077 (Updating prometheus deployment (+1 -> 1))
Nov 24 09:29:37 compute-0 ceph-mgr[74626]: [progress INFO root] Completed event 2e1526f6-f9c3-41c7-b285-0fc61db11077 (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Nov 24 09:29:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Nov 24 09:29:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 24 09:29:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:37 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.1 deep-scrub starts
Nov 24 09:29:37 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.1 deep-scrub ok
Nov 24 09:29:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:38 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4002d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:38 compute-0 ceph-mon[74331]: 11.a deep-scrub starts
Nov 24 09:29:38 compute-0 ceph-mon[74331]: 11.a deep-scrub ok
Nov 24 09:29:38 compute-0 ceph-mon[74331]: 10.6 scrub starts
Nov 24 09:29:38 compute-0 ceph-mon[74331]: 10.6 scrub ok
Nov 24 09:29:38 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:38 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:38 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:38 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 24 09:29:38 compute-0 ceph-mon[74331]: 8.1 deep-scrub starts
Nov 24 09:29:38 compute-0 ceph-mon[74331]: 8.1 deep-scrub ok
Nov 24 09:29:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 353 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 10 op/s; 98 B/s, 4 objects/s recovering
Nov 24 09:29:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Nov 24 09:29:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 24 09:29:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:38 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 24 09:29:38 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.mauvni(active, since 88s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:38 compute-0 sshd-session[92753]: Connection closed by 192.168.122.100 port 58974
Nov 24 09:29:38 compute-0 sshd-session[92722]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 24 09:29:38 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 24 09:29:38 compute-0 systemd[1]: session-35.scope: Consumed 47.084s CPU time.
Nov 24 09:29:38 compute-0 systemd-logind[822]: Session 35 logged out. Waiting for processes to exit.
Nov 24 09:29:38 compute-0 systemd-logind[822]: Removed session 35.
Nov 24 09:29:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ignoring --setuser ceph since I am not root
Nov 24 09:29:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ignoring --setgroup ceph since I am not root
Nov 24 09:29:38 compute-0 ceph-mgr[74626]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 24 09:29:38 compute-0 ceph-mgr[74626]: pidfile_write: ignore empty --pid-file
Nov 24 09:29:38 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'alerts'
Nov 24 09:29:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:38.828+0000 7fac81914140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:29:38 compute-0 ceph-mgr[74626]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 09:29:38 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'balancer'
Nov 24 09:29:38 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Nov 24 09:29:38 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Nov 24 09:29:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:38 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0002b10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:38.912+0000 7fac81914140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:29:38 compute-0 ceph-mgr[74626]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 09:29:38 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'cephadm'
Nov 24 09:29:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:39.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 24 09:29:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:39.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:39 compute-0 ceph-mon[74331]: 12.11 scrub starts
Nov 24 09:29:39 compute-0 ceph-mon[74331]: 12.11 scrub ok
Nov 24 09:29:39 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 24 09:29:39 compute-0 ceph-mon[74331]: 12.1f scrub starts
Nov 24 09:29:39 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 24 09:29:39 compute-0 ceph-mon[74331]: mgrmap e26: compute-0.mauvni(active, since 88s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:39 compute-0 ceph-mon[74331]: 8.0 scrub starts
Nov 24 09:29:39 compute-0 ceph-mon[74331]: 8.0 scrub ok
Nov 24 09:29:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 24 09:29:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 24 09:29:39 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 24 09:29:39 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 72 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=14.571634293s) [1] r=-1 lpr=72 pi=[54,72)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 200.069030762s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:39 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 72 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=14.571595192s) [1] r=-1 lpr=72 pi=[54,72)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.069030762s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:39 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 72 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=14.571098328s) [1] r=-1 lpr=72 pi=[54,72)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 200.069107056s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:39 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 72 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=14.570952415s) [1] r=-1 lpr=72 pi=[54,72)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.069107056s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:39 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 72 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=14.571009636s) [1] r=-1 lpr=72 pi=[54,72)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 200.069747925s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:39 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 72 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=14.570903778s) [1] r=-1 lpr=72 pi=[54,72)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.069747925s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:39 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 72 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=14.574321747s) [1] r=-1 lpr=72 pi=[54,72)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 200.073562622s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:39 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 72 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=14.574255943s) [1] r=-1 lpr=72 pi=[54,72)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.073562622s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.555786) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579555825, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 648, "num_deletes": 251, "total_data_size": 934415, "memory_usage": 947416, "flush_reason": "Manual Compaction"}
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579569356, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 909671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7332, "largest_seqno": 7979, "table_properties": {"data_size": 906015, "index_size": 1372, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9601, "raw_average_key_size": 20, "raw_value_size": 898100, "raw_average_value_size": 1906, "num_data_blocks": 61, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976568, "oldest_key_time": 1763976568, "file_creation_time": 1763976579, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 13628 microseconds, and 3984 cpu microseconds.
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.569411) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 909671 bytes OK
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.569436) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.573872) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.573917) EVENT_LOG_v1 {"time_micros": 1763976579573906, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.573947) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 930660, prev total WAL file size 930660, number of live WAL files 2.
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.574644) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(888KB)], [20(11MB)]
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579574677, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 12657461, "oldest_snapshot_seqno": -1}
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3126 keys, 11432716 bytes, temperature: kUnknown
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579671794, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 11432716, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11407783, "index_size": 15992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 80791, "raw_average_key_size": 25, "raw_value_size": 11346117, "raw_average_value_size": 3629, "num_data_blocks": 696, "num_entries": 3126, "num_filter_entries": 3126, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763976579, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:29:39 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'crash'
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.672159) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11432716 bytes
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.687193) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.2 rd, 117.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.2 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(26.5) write-amplify(12.6) OK, records in: 3651, records dropped: 525 output_compression: NoCompression
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.687233) EVENT_LOG_v1 {"time_micros": 1763976579687216, "job": 6, "event": "compaction_finished", "compaction_time_micros": 97240, "compaction_time_cpu_micros": 26897, "output_level": 6, "num_output_files": 1, "total_output_size": 11432716, "num_input_records": 3651, "num_output_records": 3126, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579687528, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976579692436, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.574584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.692467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.692475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.692476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.692477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:29:39 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:29:39.692479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:29:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:39.760+0000 7fac81914140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:29:39 compute-0 ceph-mgr[74626]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 09:29:39 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'dashboard'
Nov 24 09:29:39 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 24 09:29:39 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 24 09:29:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:40 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0002b10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:40 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'devicehealth'
Nov 24 09:29:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:40.452+0000 7fac81914140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-0 ceph-mgr[74626]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 09:29:40 compute-0 ceph-mon[74331]: 12.1f scrub ok
Nov 24 09:29:40 compute-0 ceph-mon[74331]: 12.13 scrub starts
Nov 24 09:29:40 compute-0 ceph-mon[74331]: 12.13 scrub ok
Nov 24 09:29:40 compute-0 ceph-mon[74331]: from='mgr.14484 192.168.122.100:0/2522741294' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 24 09:29:40 compute-0 ceph-mon[74331]: osdmap e72: 3 total, 3 up, 3 in
Nov 24 09:29:40 compute-0 ceph-mon[74331]: 8.7 scrub starts
Nov 24 09:29:40 compute-0 ceph-mon[74331]: 8.7 scrub ok
Nov 24 09:29:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:40 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0002b10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 24 09:29:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 24 09:29:40 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 24 09:29:40 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 73 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=0 lpr=73 pi=[54,73)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:40 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 73 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=0 lpr=73 pi=[54,73)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:40 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 73 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=0 lpr=73 pi=[54,73)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:40 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 73 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=0 lpr=73 pi=[54,73)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:40 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 73 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=0 lpr=73 pi=[54,73)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:40 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 73 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=0 lpr=73 pi=[54,73)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:40 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 73 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=0 lpr=73 pi=[54,73)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:40 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 73 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] r=0 lpr=73 pi=[54,73)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 09:29:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 09:29:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]:   from numpy import show_config as show_numpy_config
Nov 24 09:29:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:40.644+0000 7fac81914140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-0 ceph-mgr[74626]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'influx'
Nov 24 09:29:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:40.721+0000 7fac81914140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-0 ceph-mgr[74626]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'insights'
Nov 24 09:29:40 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'iostat'
Nov 24 09:29:40 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 24 09:29:40 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 24 09:29:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:40 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8000b60 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:40.882+0000 7fac81914140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-0 ceph-mgr[74626]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 09:29:40 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'k8sevents'
Nov 24 09:29:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:29:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:41.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:29:41 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'localpool'
Nov 24 09:29:41 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 09:29:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:41.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 24 09:29:41 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'mirroring'
Nov 24 09:29:41 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'nfs'
Nov 24 09:29:41 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 24 09:29:41 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 24 09:29:41 compute-0 ceph-mon[74331]: 10.1a scrub starts
Nov 24 09:29:41 compute-0 ceph-mon[74331]: 10.1a scrub ok
Nov 24 09:29:41 compute-0 ceph-mon[74331]: 8.9 scrub starts
Nov 24 09:29:41 compute-0 ceph-mon[74331]: 8.9 scrub ok
Nov 24 09:29:41 compute-0 ceph-mon[74331]: osdmap e73: 3 total, 3 up, 3 in
Nov 24 09:29:41 compute-0 ceph-mon[74331]: 11.6 scrub starts
Nov 24 09:29:41 compute-0 ceph-mon[74331]: 11.6 scrub ok
Nov 24 09:29:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 24 09:29:41 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 24 09:29:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:41.999+0000 7fac81914140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'orchestrator'
Nov 24 09:29:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:42.250+0000 7fac81914140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 09:29:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:42 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4002d40 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:42.326+0000 7fac81914140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'osd_support'
Nov 24 09:29:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:42.392+0000 7fac81914140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 09:29:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:42.468+0000 7fac81914140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'progress'
Nov 24 09:29:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:42 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:42.538+0000 7fac81914140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'prometheus'
Nov 24 09:29:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 74 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=73/74 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] async=[1] r=0 lpr=73 pi=[54,73)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 74 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=73/74 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] async=[1] r=0 lpr=73 pi=[54,73)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 74 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=73/74 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] async=[1] r=0 lpr=73 pi=[54,73)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 74 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=73/74 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=73) [1]/[0] async=[1] r=0 lpr=73 pi=[54,73)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 24 09:29:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 24 09:29:42 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 24 09:29:42 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 24 09:29:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 75 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=73/74 n=5 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75 pruub=15.801430702s) [1] async=[1] r=-1 lpr=75 pi=[54,75)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 204.543685913s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 75 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=73/74 n=5 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75 pruub=15.801348686s) [1] r=-1 lpr=75 pi=[54,75)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.543685913s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 75 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=73/74 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75 pruub=15.804235458s) [1] async=[1] r=-1 lpr=75 pi=[54,75)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 204.546615601s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 75 pg[9.e( v 45'1130 (0'0,45'1130] local-lis/les=73/74 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75 pruub=15.804140091s) [1] r=-1 lpr=75 pi=[54,75)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.546615601s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 75 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=73/74 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75 pruub=15.804328918s) [1] async=[1] r=-1 lpr=75 pi=[54,75)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 204.547164917s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 75 pg[9.6( v 45'1130 (0'0,45'1130] local-lis/les=73/74 n=6 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75 pruub=15.804252625s) [1] r=-1 lpr=75 pi=[54,75)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.547164917s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 75 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=73/74 n=4 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75 pruub=15.803491592s) [1] async=[1] r=-1 lpr=75 pi=[54,75)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 204.546691895s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:42 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 75 pg[9.16( v 45'1130 (0'0,45'1130] local-lis/les=73/74 n=4 ec=54/39 lis/c=73/54 les/c/f=74/55/0 sis=75 pruub=15.803431511s) [1] r=-1 lpr=75 pi=[54,75)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.546691895s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:42 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 24 09:29:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:42 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:42.892+0000 7fac81914140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rbd_support'
Nov 24 09:29:42 compute-0 ceph-mon[74331]: 12.1b scrub starts
Nov 24 09:29:42 compute-0 ceph-mon[74331]: 12.1b scrub ok
Nov 24 09:29:42 compute-0 ceph-mon[74331]: 11.16 scrub starts
Nov 24 09:29:42 compute-0 ceph-mon[74331]: 11.16 scrub ok
Nov 24 09:29:42 compute-0 ceph-mon[74331]: 10.1f scrub starts
Nov 24 09:29:42 compute-0 ceph-mon[74331]: 10.1f scrub ok
Nov 24 09:29:42 compute-0 ceph-mon[74331]: 11.18 scrub starts
Nov 24 09:29:42 compute-0 ceph-mon[74331]: 11.18 scrub ok
Nov 24 09:29:42 compute-0 ceph-mon[74331]: osdmap e74: 3 total, 3 up, 3 in
Nov 24 09:29:42 compute-0 ceph-mon[74331]: 8.a scrub starts
Nov 24 09:29:42 compute-0 ceph-mon[74331]: 8.a scrub ok
Nov 24 09:29:42 compute-0 ceph-mon[74331]: osdmap e75: 3 total, 3 up, 3 in
Nov 24 09:29:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:42.996+0000 7fac81914140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 09:29:42 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'restful'
Nov 24 09:29:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:43.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:43 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rgw'
Nov 24 09:29:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:43.436+0000 7fac81914140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:29:43 compute-0 ceph-mgr[74626]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 09:29:43 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'rook'
Nov 24 09:29:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:43.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 24 09:29:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 24 09:29:43 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 24 09:29:43 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.1e deep-scrub starts
Nov 24 09:29:43 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.1e deep-scrub ok
Nov 24 09:29:43 compute-0 ceph-mon[74331]: 12.16 scrub starts
Nov 24 09:29:43 compute-0 ceph-mon[74331]: 12.16 scrub ok
Nov 24 09:29:43 compute-0 ceph-mon[74331]: 8.1a scrub starts
Nov 24 09:29:43 compute-0 ceph-mon[74331]: 8.1a scrub ok
Nov 24 09:29:43 compute-0 ceph-mon[74331]: 8.d deep-scrub starts
Nov 24 09:29:43 compute-0 ceph-mon[74331]: 8.d deep-scrub ok
Nov 24 09:29:43 compute-0 ceph-mon[74331]: 12.14 scrub starts
Nov 24 09:29:43 compute-0 ceph-mon[74331]: osdmap e76: 3 total, 3 up, 3 in
Nov 24 09:29:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:44.008+0000 7fac81914140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'selftest'
Nov 24 09:29:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:44.084+0000 7fac81914140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'snap_schedule'
Nov 24 09:29:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:44.171+0000 7fac81914140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'stats'
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'status'
Nov 24 09:29:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:44 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80016a0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:44.328+0000 7fac81914140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'telegraf'
Nov 24 09:29:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:44.402+0000 7fac81914140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'telemetry'
Nov 24 09:29:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:44 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4003e40 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:44.569+0000 7fac81914140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 09:29:44 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.rzcnzg restarted
Nov 24 09:29:44 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.rzcnzg started
Nov 24 09:29:44 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 24 09:29:44 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 24 09:29:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:44.800+0000 7fac81914140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 09:29:44 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'volumes'
Nov 24 09:29:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:44 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:44 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.qelqsg restarted
Nov 24 09:29:44 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.qelqsg started
Nov 24 09:29:45 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.mauvni(active, since 94s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:45 compute-0 ceph-mon[74331]: 12.14 scrub ok
Nov 24 09:29:45 compute-0 ceph-mon[74331]: 8.1e deep-scrub starts
Nov 24 09:29:45 compute-0 ceph-mon[74331]: 8.1e deep-scrub ok
Nov 24 09:29:45 compute-0 ceph-mon[74331]: 11.13 deep-scrub starts
Nov 24 09:29:45 compute-0 ceph-mon[74331]: 11.13 deep-scrub ok
Nov 24 09:29:45 compute-0 ceph-mon[74331]: 10.7 deep-scrub starts
Nov 24 09:29:45 compute-0 ceph-mon[74331]: 10.7 deep-scrub ok
Nov 24 09:29:45 compute-0 ceph-mon[74331]: Standby manager daemon compute-2.rzcnzg restarted
Nov 24 09:29:45 compute-0 ceph-mon[74331]: Standby manager daemon compute-2.rzcnzg started
Nov 24 09:29:45 compute-0 ceph-mon[74331]: Standby manager daemon compute-1.qelqsg restarted
Nov 24 09:29:45 compute-0 ceph-mon[74331]: Standby manager daemon compute-1.qelqsg started
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:45.113+0000 7fac81914140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr[py] Loading python module 'zabbix'
Nov 24 09:29:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:29:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:45.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:45.182+0000 7fac81914140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 09:29:45 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mauvni restarted
Nov 24 09:29:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 24 09:29:45 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mauvni
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: ms_deliver_dispatch: unhandled message 0x5559fe581860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 24 09:29:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 24 09:29:45 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 24 09:29:45 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.mauvni(active, starting, since 0.0598706s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr handle_mgr_map Activating!
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr handle_mgr_map I am now activating
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: balancer
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : Manager daemon compute-0.mauvni is now available
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Starting
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:29:45
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: cephadm
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: crash
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: dashboard
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO access_control] Loading user roles DB version=2
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO sso] Loading SSO DB version=1
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: devicehealth
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Starting
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: iostat
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: nfs
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: orchestrator
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: pg_autoscaler
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: progress
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [progress INFO root] Loading...
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fac08cd7a00>, <progress.module.GhostEvent object at 0x7fac08cd7c40>, <progress.module.GhostEvent object at 0x7fac08cd7c70>, <progress.module.GhostEvent object at 0x7fac08cd7ca0>, <progress.module.GhostEvent object at 0x7fac08cd7cd0>, <progress.module.GhostEvent object at 0x7fac08cd7d00>, <progress.module.GhostEvent object at 0x7fac08cd7d30>, <progress.module.GhostEvent object at 0x7fac08cd7d60>, <progress.module.GhostEvent object at 0x7fac08cd7d90>, <progress.module.GhostEvent object at 0x7fac08cd7dc0>, <progress.module.GhostEvent object at 0x7fac08cd7df0>, <progress.module.GhostEvent object at 0x7fac08cd7e20>, <progress.module.GhostEvent object at 0x7fac08cd7e50>, <progress.module.GhostEvent object at 0x7fac08cd7e80>, <progress.module.GhostEvent object at 0x7fac08cd7eb0>, <progress.module.GhostEvent object at 0x7fac08cd7ee0>, <progress.module.GhostEvent object at 0x7fac08cd7f10>, <progress.module.GhostEvent object at 0x7fac08cd7f40>, <progress.module.GhostEvent object at 0x7fac08cd7f70>, <progress.module.GhostEvent object at 0x7fac08cd7fa0>, <progress.module.GhostEvent object at 0x7fac08cd7fd0>, <progress.module.GhostEvent object at 0x7fac08ce4040>, <progress.module.GhostEvent object at 0x7fac08ce4070>, <progress.module.GhostEvent object at 0x7fac08ce40a0>] historic events
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [progress INFO root] Loaded OSDMap, ready.
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 24 09:29:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: prometheus
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [prometheus INFO root] server_addr: :: server_port: 9283
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [prometheus INFO root] Cache enabled
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [prometheus INFO root] starting metric collection thread
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [prometheus INFO root] Starting engine...
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:29:45] ENGINE Bus STARTING
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: [24/Nov/2025:09:29:45] ENGINE Bus STARTING
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: CherryPy Checker:
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: The Application mounted at '' has an empty config.
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] recovery thread starting
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] starting setup
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: rbd_support
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: restful
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: status
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [restful INFO root] server_addr: :: server_port: 8003
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: telemetry
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 09:29:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"} v 0)
Nov 24 09:29:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [restful WARNING root] server not running: no certificate configured
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] PerfHandler: starting
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TaskHandler: starting
Nov 24 09:29:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"} v 0)
Nov 24 09:29:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: mgr load Constructed class from module: volumes
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] setup complete
Nov 24 09:29:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:45.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:45.457+0000 7fabe805b640 -1 client.0 error registering admin socket command: (17) File exists
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: client.0 error registering admin socket command: (17) File exists
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:45.458+0000 7fabe6057640 -1 client.0 error registering admin socket command: (17) File exists
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: client.0 error registering admin socket command: (17) File exists
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:45.458+0000 7fabe6057640 -1 client.0 error registering admin socket command: (17) File exists
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: client.0 error registering admin socket command: (17) File exists
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:45.458+0000 7fabe6057640 -1 client.0 error registering admin socket command: (17) File exists
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: client.0 error registering admin socket command: (17) File exists
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:45.459+0000 7fabe6057640 -1 client.0 error registering admin socket command: (17) File exists
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: client.0 error registering admin socket command: (17) File exists
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T09:29:45.459+0000 7fabe6057640 -1 client.0 error registering admin socket command: (17) File exists
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: client.0 error registering admin socket command: (17) File exists
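
The repeated "(17) File exists" entries are the same EEXIST complaint emitted once per duplicate admin-socket command registration, and each appears twice because the container unit mirrors the daemon's stderr. After a mgr module restart this is typically harmless noise; a quick histogram (a sketch, helper name hypothetical) makes the repetition easy to quantify:

from collections import Counter

def error_histogram(lines):
    """Count repeated daemon error payloads, ignoring unit prefixes."""
    counts = Counter()
    for line in lines:
        # Keep everything after the last ']: ' separator (the message body).
        body = line.rsplit("]: ", 1)[-1].strip()
        if "error registering admin socket command" in body:
            counts[body] += 1
    return counts

sample = [
    "ceph-mgr[74626]: client.0 error registering admin socket command: (17) File exists",
] * 6
print(error_histogram(sample))   # Counter({'client.0 error ... File exists': 6})
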
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:29:45] ENGINE Serving on http://:::9283
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:29:45] ENGINE Bus STARTED
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [prometheus INFO root] Engine started.
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: [24/Nov/2025:09:29:45] ENGINE Serving on http://:::9283
Nov 24 09:29:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: [24/Nov/2025:09:29:45] ENGINE Bus STARTED
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
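
The dashboard module registers one controller per line at startup; /api/* paths form the REST API proper while /ui-api/* endpoints are consumed only by the frontend. Rebuilding the route table from these lines is a one-regex job; a sketch over the exact format above (grouping choice is mine):

import re
from collections import defaultdict

ROUTE_RE = re.compile(r"Initializing controller: (?P<ctrl>\S+) -> (?P<path>\S+)")

def route_table(lines):
    """Group dashboard controllers by top-level mount point (/api vs /ui-api)."""
    table = defaultdict(list)
    for line in lines:
        m = ROUTE_RE.search(line)
        if m:
            mount = "/" + m.group("path").lstrip("/").split("/", 1)[0]
            table[mount].append((m.group("ctrl"), m.group("path")))
    return dict(table)

sample = [
    "Initializing controller: Rbd -> /api/block/image",
    "Initializing controller: RbdStatus -> /ui-api/block/rbd",
]
print(route_table(sample))
# {'/api': [('Rbd', '/api/block/image')], '/ui-api': [('RbdStatus', '/ui-api/block/rbd')]}
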
Nov 24 09:29:45 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.10 deep-scrub starts
Nov 24 09:29:45 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.10 deep-scrub ok
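
Scrub events arrive as starts/ok pairs keyed by PG id, so joining them on (pgid, scrub type) gives per-PG scrub duration at the journal's one-second resolution. A sketch assuming the syslog timestamp layout of this capture (the year is not in the timestamp, so it is passed in; helper name hypothetical):

import re
from datetime import datetime

SCRUB_RE = re.compile(
    r"^(?P<ts>\w{3} +\d+ [\d:]{8}) .*?"
    r"(?P<pgid>[0-9a-f]+\.[0-9a-f]+) (?P<kind>deep-scrub|scrub) (?P<event>starts|ok)"
)

def scrub_durations(lines, year=2025):
    starts, durations = {}, []
    for line in lines:
        m = SCRUB_RE.search(line)
        if not m:
            continue
        ts = datetime.strptime(f"{year} {m.group('ts')}", "%Y %b %d %H:%M:%S")
        key = (m.group("pgid"), m.group("kind"))
        if m.group("event") == "starts":
            starts[key] = ts
        elif key in starts:
            durations.append((key, (ts - starts.pop(key)).total_seconds()))
    return durations

sample = [
    "Nov 24 09:29:45 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.10 deep-scrub starts",
    "Nov 24 09:29:45 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.10 deep-scrub ok",
]
print(scrub_durations(sample))   # [(('11.10', 'deep-scrub'), 0.0)]
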
Nov 24 09:29:45 compute-0 sshd-session[100279]: Accepted publickey for ceph-admin from 192.168.122.100 port 47718 ssh2: RSA SHA256:d901dNHY28a6fGfVJZBiZ/6DokdrVSFZakqDQ7cQMIA
Nov 24 09:29:45 compute-0 systemd-logind[822]: New session 37 of user ceph-admin.
Nov 24 09:29:45 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Nov 24 09:29:45 compute-0 sshd-session[100279]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 24 09:29:45 compute-0 ceph-mgr[74626]: [dashboard INFO dashboard.module] Engine started.
Nov 24 09:29:45 compute-0 sudo[100295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:45 compute-0 sudo[100295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:45 compute-0 sudo[100295]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:45 compute-0 sudo[100320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:29:45 compute-0 sudo[100320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
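
The ssh session and sudo entries above are cephadm's orchestration loop at work: the active mgr logs in as ceph-admin, locates python3, then runs the staged cephadm script under sudo (here with 'ls'; 'gather-facts' and 'list-networks' follow below) against a pinned container image digest. The sudo audit fields split cleanly; a sketch assuming the pam/sudo layout in this capture:

import re

SUDO_RE = re.compile(
    r"sudo\[\d+\]: (?P<user>\S+) : PWD=(?P<pwd>\S+) ; "
    r"USER=(?P<as_user>\S+) ; COMMAND=(?P<cmd>.+)$"
)

# Path shortened for the sample; the real line carries the full cephadm hash.
line = ("sudo[100320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; "
        "COMMAND=/bin/python3 /var/lib/ceph/.../cephadm.1a88... --timeout 895 ls")
m = SUDO_RE.search(line)
print(m.group("as_user"), "->", m.group("cmd").split()[-1])   # root -> ls
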
Nov 24 09:29:46 compute-0 ceph-mon[74331]: 11.1f scrub starts
Nov 24 09:29:46 compute-0 ceph-mon[74331]: 11.1f scrub ok
Nov 24 09:29:46 compute-0 ceph-mon[74331]: mgrmap e27: compute-0.mauvni(active, since 94s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:46 compute-0 ceph-mon[74331]: Active manager daemon compute-0.mauvni restarted
Nov 24 09:29:46 compute-0 ceph-mon[74331]: Activating manager daemon compute-0.mauvni
Nov 24 09:29:46 compute-0 ceph-mon[74331]: osdmap e77: 3 total, 3 up, 3 in
Nov 24 09:29:46 compute-0 ceph-mon[74331]: mgrmap e28: compute-0.mauvni(active, starting, since 0.0598706s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cibmfe"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.bbilht"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vpamdk"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mauvni", "id": "compute-0.mauvni"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-2.rzcnzg", "id": "compute-2.rzcnzg"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qelqsg", "id": "compute-1.qelqsg"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: Manager daemon compute-0.mauvni is now available
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/mirror_snapshot_schedule"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: 8.2 scrub starts
Nov 24 09:29:46 compute-0 ceph-mon[74331]: 8.2 scrub ok
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mauvni/trash_purge_schedule"}]: dispatch
Nov 24 09:29:46 compute-0 ceph-mon[74331]: 12.1 scrub starts
Nov 24 09:29:46 compute-0 ceph-mon[74331]: 12.1 scrub ok
Nov 24 09:29:46 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.mauvni(active, since 1.07462s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
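
The pgmap digest compresses the cluster picture into one line: map version, PG count, per-state breakdown, and usage in the binary units (KiB/MiB/GiB) that Ceph prints. A parsing sketch matched against the line above:

import re

PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

line = ("pgmap v3: 353 pgs: 353 active+clean; 456 KiB data, "
        "125 MiB used, 60 GiB / 60 GiB avail")
m = PGMAP_RE.search(line)
states = dict(
    (s.split()[1], int(s.split()[0]))
    for s in m.group("states").split(",")
)
print(m.group("version"), states)   # 3 {'active+clean': 353}
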
Nov 24 09:29:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:46 compute-0 podman[100419]: 2025-11-24 09:29:46.560029169 +0000 UTC m=+0.078070985 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:29:46 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:29:46] ENGINE Bus STARTING
Nov 24 09:29:46 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:29:46] ENGINE Bus STARTING
Nov 24 09:29:46 compute-0 podman[100419]: 2025-11-24 09:29:46.677485746 +0000 UTC m=+0.195527552 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 09:29:46 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 24 09:29:46 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 24 09:29:46 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:29:46] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:29:46 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:29:46] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:29:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:46 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:29:46] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:29:46 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:29:46] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:29:46 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:29:46] ENGINE Bus STARTED
Nov 24 09:29:46 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:29:46] ENGINE Bus STARTED
Nov 24 09:29:46 compute-0 ceph-mgr[74626]: [cephadm INFO cherrypy.error] [24/Nov/2025:09:29:46] ENGINE Client ('192.168.122.100', 37806) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:29:46 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : [24/Nov/2025:09:29:46] ENGINE Client ('192.168.122.100', 37806) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
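
The cephadm module has just exposed its plain-HTTP endpoint on 8765 and its HTTPS endpoint on 7150, and the "Client ... lost" handshake EOF is consistent with a peer (a health probe, or the cephadm agent reconnecting) closing the TCP connection before finishing TLS; that reading is an inference, not something the log states. A hedged reachability sketch: the address comes from the Serving lines, the function name is hypothetical, and verification is disabled only because cephadm endpoints use a cluster-internal CA:

import socket
import ssl

def tls_probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Complete a TLS handshake and report the negotiated protocol version."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False            # probe only: self-managed cephadm CA
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version() or "unknown"

# Endpoint from the ENGINE Serving line above; only reachable on that network.
print(tls_probe("192.168.122.100", 7150))
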
Nov 24 09:29:47 compute-0 ceph-mon[74331]: 11.10 deep-scrub starts
Nov 24 09:29:47 compute-0 ceph-mon[74331]: 11.10 deep-scrub ok
Nov 24 09:29:47 compute-0 ceph-mon[74331]: mgrmap e29: compute-0.mauvni(active, since 1.07462s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:47 compute-0 ceph-mon[74331]: 8.11 scrub starts
Nov 24 09:29:47 compute-0 ceph-mon[74331]: 8.11 scrub ok
Nov 24 09:29:47 compute-0 ceph-mon[74331]: 11.14 deep-scrub starts
Nov 24 09:29:47 compute-0 ceph-mon[74331]: [24/Nov/2025:09:29:46] ENGINE Bus STARTING
Nov 24 09:29:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:29:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:47.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:29:47 compute-0 podman[100577]: 2025-11-24 09:29:47.243961247 +0000 UTC m=+0.051966048 container exec 7b41a24888e2dd3dca187bd76560d76829b7d7b7dcf75bceeedb6a669c1298b7 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Nov 24 09:29:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 09:29:47 compute-0 podman[100577]: 2025-11-24 09:29:47.27648495 +0000 UTC m=+0.084489751 container exec_died 7b41a24888e2dd3dca187bd76560d76829b7d7b7dcf75bceeedb6a669c1298b7 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:47 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Check health
Nov 24 09:29:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:47.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:47 compute-0 podman[100661]: 2025-11-24 09:29:47.503607019 +0000 UTC m=+0.050406975 container exec 3adc7e4dbfb76acd70b92bdc8783d49c26735889ac1576ee9a74ae48f52acf62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:29:47 compute-0 podman[100661]: 2025-11-24 09:29:47.517541402 +0000 UTC m=+0.064341358 container exec_died 3adc7e4dbfb76acd70b92bdc8783d49c26735889ac1576ee9a74ae48f52acf62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:29:47 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 24 09:29:47 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 24 09:29:47 compute-0 podman[100727]: 2025-11-24 09:29:47.7493469 +0000 UTC m=+0.060155193 container exec 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:29:47 compute-0 podman[100727]: 2025-11-24 09:29:47.757497994 +0000 UTC m=+0.068306257 container exec_died 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:29:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:29:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:29:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:47 compute-0 podman[100794]: 2025-11-24 09:29:47.951073052 +0000 UTC m=+0.045879292 container exec da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, architecture=x86_64, release=1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived)
Nov 24 09:29:47 compute-0 podman[100794]: 2025-11-24 09:29:47.988535171 +0000 UTC m=+0.083341391 container exec_died da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.component=keepalived-container, release=1793)
Nov 24 09:29:48 compute-0 ceph-mon[74331]: 11.14 deep-scrub ok
Nov 24 09:29:48 compute-0 ceph-mon[74331]: 8.13 scrub starts
Nov 24 09:29:48 compute-0 ceph-mon[74331]: 8.13 scrub ok
Nov 24 09:29:48 compute-0 ceph-mon[74331]: [24/Nov/2025:09:29:46] ENGINE Serving on http://192.168.122.100:8765
Nov 24 09:29:48 compute-0 ceph-mon[74331]: [24/Nov/2025:09:29:46] ENGINE Serving on https://192.168.122.100:7150
Nov 24 09:29:48 compute-0 ceph-mon[74331]: [24/Nov/2025:09:29:46] ENGINE Bus STARTED
Nov 24 09:29:48 compute-0 ceph-mon[74331]: [24/Nov/2025:09:29:46] ENGINE Client ('192.168.122.100', 37806) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 09:29:48 compute-0 ceph-mon[74331]: pgmap v4: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:48 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 09:29:48 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 09:29:48 compute-0 ceph-mon[74331]: 8.b scrub starts
Nov 24 09:29:48 compute-0 ceph-mon[74331]: 8.b scrub ok
Nov 24 09:29:48 compute-0 ceph-mon[74331]: 11.12 scrub starts
Nov 24 09:29:48 compute-0 ceph-mon[74331]: 11.12 scrub ok
Nov 24 09:29:48 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:48 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:48 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.mauvni(active, since 2s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:48 compute-0 podman[100858]: 2025-11-24 09:29:48.201780509 +0000 UTC m=+0.067357422 container exec 32681d7ec5cc8674cee7672941d75d1674b5a61184918a28db89f06c57c7c5f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:48 compute-0 podman[100858]: 2025-11-24 09:29:48.230090946 +0000 UTC m=+0.095667849 container exec_died 32681d7ec5cc8674cee7672941d75d1674b5a61184918a28db89f06c57c7c5f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 24 09:29:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 24 09:29:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 24 09:29:48 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 24 09:29:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:48 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:48 compute-0 podman[100934]: 2025-11-24 09:29:48.437278097 +0000 UTC m=+0.054737454 container exec a0674656060959d25392ea4042b567724541ad68ff4b7e0cdef72cb164c1b850 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:48 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:48 compute-0 podman[100934]: 2025-11-24 09:29:48.592313027 +0000 UTC m=+0.209772424 container exec_died a0674656060959d25392ea4042b567724541ad68ff4b7e0cdef72cb164c1b850 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:29:48 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.1d deep-scrub starts
Nov 24 09:29:48 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 8.1d deep-scrub ok
Nov 24 09:29:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:48 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:29:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:29:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 09:29:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:48 compute-0 podman[101047]: 2025-11-24 09:29:48.951793073 +0000 UTC m=+0.050099038 container exec 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:48 compute-0 podman[101047]: 2025-11-24 09:29:48.990470644 +0000 UTC m=+0.088776579 container exec_died 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:29:49 compute-0 sudo[100320]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:49 compute-0 ceph-mon[74331]: 11.11 scrub starts
Nov 24 09:29:49 compute-0 ceph-mon[74331]: 11.11 scrub ok
Nov 24 09:29:49 compute-0 ceph-mon[74331]: mgrmap e30: compute-0.mauvni(active, since 2s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:49 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 24 09:29:49 compute-0 ceph-mon[74331]: osdmap e78: 3 total, 3 up, 3 in
Nov 24 09:29:49 compute-0 ceph-mon[74331]: 11.8 scrub starts
Nov 24 09:29:49 compute-0 ceph-mon[74331]: 11.8 scrub ok
Nov 24 09:29:49 compute-0 ceph-mon[74331]: 11.1 scrub starts
Nov 24 09:29:49 compute-0 ceph-mon[74331]: 11.1 scrub ok
Nov 24 09:29:49 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:49 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:49 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:49 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:49 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:49 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:49 compute-0 sudo[101089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:49 compute-0 sudo[101089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:49 compute-0 sudo[101089]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:29:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:29:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:49.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:29:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:29:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:49 compute-0 sudo[101114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:29:49 compute-0 sudo[101114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v6: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Nov 24 09:29:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 09:29:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:49.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:49 compute-0 sudo[101114]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:49 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 24 09:29:49 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 24 09:29:49 compute-0 sudo[101171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:49 compute-0 sudo[101171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:49 compute-0 sudo[101171]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:49 compute-0 sudo[101196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 24 09:29:49 compute-0 sudo[101196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:50 compute-0 sudo[101196]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:50 compute-0 ceph-mon[74331]: 8.1d deep-scrub starts
Nov 24 09:29:50 compute-0 ceph-mon[74331]: 8.1d deep-scrub ok
Nov 24 09:29:50 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:50 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:50 compute-0 ceph-mon[74331]: pgmap v6: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:29:50 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 09:29:50 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 09:29:50 compute-0 ceph-mon[74331]: 12.2 deep-scrub starts
Nov 24 09:29:50 compute-0 ceph-mon[74331]: 12.2 deep-scrub ok
Nov 24 09:29:50 compute-0 ceph-mon[74331]: 8.8 deep-scrub starts
Nov 24 09:29:50 compute-0 ceph-mon[74331]: 8.8 deep-scrub ok
Nov 24 09:29:50 compute-0 ceph-mon[74331]: 10.2 scrub starts
Nov 24 09:29:50 compute-0 ceph-mon[74331]: 10.2 scrub ok
Nov 24 09:29:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 09:29:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.mauvni(active, since 5s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 24 09:29:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 24 09:29:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 24 09:29:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 24 09:29:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:50 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:50 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 79 pg[9.8( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=79 pruub=11.765710831s) [2] r=-1 lpr=79 pi=[54,79)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 208.069244385s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:50 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 79 pg[9.8( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=79 pruub=11.765666008s) [2] r=-1 lpr=79 pi=[54,79)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.069244385s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:50 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 79 pg[9.18( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=79 pruub=11.769035339s) [2] r=-1 lpr=79 pi=[54,79)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 208.073379517s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:50 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 79 pg[9.18( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=79 pruub=11.769004822s) [2] r=-1 lpr=79 pi=[54,79)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 208.073379517s@ mbc={}] state<Start>: transitioning to Stray
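
These PeeringState lines are the visible effect of the osdmap e79 change: PGs 9.8 and 9.18 move their up/acting set from OSD 0 to OSD 2, so osd.0 drops from primary (role 0) to non-member (role -1) and transitions to Stray. A sketch that extracts just the remap from the line format above (fields outside the up/acting transition are skipped; the sample line is abbreviated):

import re

PEER_RE = re.compile(
    r"pg\[(?P<pgid>[0-9a-f]+\.[0-9a-f]+)\(.*?"
    r"up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], "
    r"acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\]"
)

line = ("osd.0 pg_epoch: 79 pg[9.8( v 45'1130 ... )] "
        "PeeringState::start_peering_interval up [0] -> [2], "
        "acting [0] -> [2], acting_primary 0 -> 2")
m = PEER_RE.search(line)
print(m.group("pgid"), m.group("up_old"), "->", m.group("up_new"))   # 9.8 0 -> 2
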
Nov 24 09:29:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:29:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:29:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:50 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 09:29:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:50 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:29:50 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:29:50 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:29:50 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:29:50 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:29:50 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:29:50 compute-0 sudo[101242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:29:50 compute-0 sudo[101242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:50 compute-0 sudo[101242]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:50 compute-0 sudo[101267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:29:50 compute-0 sudo[101267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:50 compute-0 sudo[101267]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:50 compute-0 sudo[101292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:29:50 compute-0 sudo[101292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:50 compute-0 sudo[101292]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:50 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 24 09:29:50 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 24 09:29:50 compute-0 sudo[101317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:50 compute-0 sudo[101317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:50 compute-0 sudo[101317]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:50 compute-0 sshd-session[101323]: Accepted publickey for zuul from 192.168.122.30 port 58456 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:29:50 compute-0 sudo[101344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:29:50 compute-0 systemd-logind[822]: New session 38 of user zuul.
Nov 24 09:29:50 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 24 09:29:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:50 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:50 compute-0 sshd-session[101323]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:29:50 compute-0 sudo[101344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:50 compute-0 sudo[101344]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:50 compute-0 sudo[101400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:29:50 compute-0 sudo[101400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:29:50] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Nov 24 09:29:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:29:50] "GET /metrics HTTP/1.1" 200 46587 "" "Prometheus/2.51.0"
Nov 24 09:29:50 compute-0 sudo[101400]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 sudo[101454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new
Nov 24 09:29:51 compute-0 sudo[101454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 sudo[101454]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 sudo[101496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 24 09:29:51 compute-0 sudo[101496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 sudo[101496]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:51 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:51 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:51 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:51 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:51 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:51 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:51 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:51 compute-0 ceph-mon[74331]: mgrmap e31: compute-0.mauvni(active, since 5s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:29:51 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 24 09:29:51 compute-0 ceph-mon[74331]: osdmap e79: 3 total, 3 up, 3 in
Nov 24 09:29:51 compute-0 ceph-mon[74331]: 12.3 scrub starts
Nov 24 09:29:51 compute-0 ceph-mon[74331]: 12.3 scrub ok
Nov 24 09:29:51 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:51 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:51 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:51 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:29:51 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:29:51 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:29:51 compute-0 ceph-mon[74331]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 09:29:51 compute-0 ceph-mon[74331]: Updating compute-1:/etc/ceph/ceph.conf
Nov 24 09:29:51 compute-0 ceph-mon[74331]: Updating compute-2:/etc/ceph/ceph.conf
Nov 24 09:29:51 compute-0 ceph-mon[74331]: 11.5 scrub starts
Nov 24 09:29:51 compute-0 ceph-mon[74331]: 11.5 scrub ok
Nov 24 09:29:51 compute-0 ceph-mon[74331]: 10.13 scrub starts
Nov 24 09:29:51 compute-0 ceph-mon[74331]: 10.13 scrub ok
Nov 24 09:29:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:29:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:51.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:29:51 compute-0 sudo[101521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:29:51 compute-0 sudo[101521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 sudo[101521]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 24 09:29:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 24 09:29:51 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 24 09:29:51 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 80 pg[9.18( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=80) [2]/[0] r=0 lpr=80 pi=[54,80)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:51 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 80 pg[9.18( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=80) [2]/[0] r=0 lpr=80 pi=[54,80)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:51 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 80 pg[9.8( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=80) [2]/[0] r=0 lpr=80 pi=[54,80)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:51 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 80 pg[9.8( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=80) [2]/[0] r=0 lpr=80 pi=[54,80)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v9: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 13 op/s
Nov 24 09:29:51 compute-0 sudo[101547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:29:51 compute-0 sudo[101547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 sudo[101547]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 sudo[101572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:29:51 compute-0 sudo[101572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 sudo[101572]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:51 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:51 compute-0 sudo[101615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:51 compute-0 sudo[101615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 sudo[101615]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:51.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:51 compute-0 sudo[101669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:29:51 compute-0 sudo[101669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 sudo[101669]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 sudo[101745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:29:51 compute-0 sudo[101745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 sudo[101745]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 sudo[101793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new
Nov 24 09:29:51 compute-0 sudo[101793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:51 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:51 compute-0 sudo[101793]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.12 deep-scrub starts
Nov 24 09:29:51 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.12 deep-scrub ok
Nov 24 09:29:51 compute-0 sudo[101818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:51 compute-0 sudo[101818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 sudo[101818]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:51 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:51 compute-0 sudo[101843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 24 09:29:51 compute-0 sudo[101843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 sudo[101843]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 sudo[101868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph
Nov 24 09:29:51 compute-0 sudo[101868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 sudo[101868]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:51 compute-0 python3.9[101790]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:29:51 compute-0 sudo[101893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:29:51 compute-0 sudo[101893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:51 compute-0 sudo[101893]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 sudo[101927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:52 compute-0 sudo[101927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 sudo[101927]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 sudo[101952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-0 sudo[101952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 sudo[101952]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:52 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:52 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:52 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:52 compute-0 sudo[102002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 24 09:29:52 compute-0 sudo[102002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 sudo[102002]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 24 09:29:52 compute-0 ceph-mon[74331]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:52 compute-0 ceph-mon[74331]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:52 compute-0 ceph-mon[74331]: osdmap e80: 3 total, 3 up, 3 in
Nov 24 09:29:52 compute-0 ceph-mon[74331]: pgmap v9: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 13 op/s
Nov 24 09:29:52 compute-0 ceph-mon[74331]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.conf
Nov 24 09:29:52 compute-0 ceph-mon[74331]: 11.3 scrub starts
Nov 24 09:29:52 compute-0 ceph-mon[74331]: 11.3 scrub ok
Nov 24 09:29:52 compute-0 ceph-mon[74331]: 11.4 scrub starts
Nov 24 09:29:52 compute-0 ceph-mon[74331]: 11.4 scrub ok
Nov 24 09:29:52 compute-0 ceph-mon[74331]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:52 compute-0 ceph-mon[74331]: 12.12 deep-scrub starts
Nov 24 09:29:52 compute-0 ceph-mon[74331]: 12.12 deep-scrub ok
Nov 24 09:29:52 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 24 09:29:52 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 81 pg[9.18( v 45'1130 (0'0,45'1130] local-lis/les=80/81 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=80) [2]/[0] async=[2] r=0 lpr=80 pi=[54,80)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:52 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:52 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 81 pg[9.8( v 45'1130 (0'0,45'1130] local-lis/les=80/81 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=80) [2]/[0] async=[2] r=0 lpr=80 pi=[54,80)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:52 compute-0 sudo[102038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-0 sudo[102038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 sudo[102038]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 sudo[102063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:52 compute-0 sudo[102063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 sudo[102063]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:52 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:52 compute-0 sudo[102088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:29:52 compute-0 sudo[102088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 sudo[102088]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 sudo[102114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config
Nov 24 09:29:52 compute-0 sudo[102114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 sudo[102114]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:52 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:52 compute-0 sudo[102148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-0 sudo[102148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 sudo[102148]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 sudo[102183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:52 compute-0 sudo[102183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 sudo[102183]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 sudo[102217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-0 sudo[102217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 sudo[102217]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.c scrub starts
Nov 24 09:29:52 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.c scrub ok
Nov 24 09:29:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 24 09:29:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 24 09:29:52 compute-0 sudo[102277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 24 09:29:52 compute-0 sudo[102277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 82 pg[9.8( v 45'1130 (0'0,45'1130] local-lis/les=80/81 n=6 ec=54/39 lis/c=80/54 les/c/f=81/55/0 sis=82 pruub=15.483053207s) [2] async=[2] r=-1 lpr=82 pi=[54,82)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 214.236526489s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:52 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 82 pg[9.8( v 45'1130 (0'0,45'1130] local-lis/les=80/81 n=6 ec=54/39 lis/c=80/54 les/c/f=81/55/0 sis=82 pruub=15.482927322s) [2] r=-1 lpr=82 pi=[54,82)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.236526489s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:52 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 82 pg[9.18( v 45'1130 (0'0,45'1130] local-lis/les=80/81 n=5 ec=54/39 lis/c=80/54 les/c/f=81/55/0 sis=82 pruub=15.477869987s) [2] async=[2] r=-1 lpr=82 pi=[54,82)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 214.231933594s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:52 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 82 pg[9.18( v 45'1130 (0'0,45'1130] local-lis/les=80/81 n=5 ec=54/39 lis/c=80/54 les/c/f=81/55/0 sis=82 pruub=15.477767944s) [2] r=-1 lpr=82 pi=[54,82)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.231933594s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:52 compute-0 sudo[102277]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:29:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:29:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:52 compute-0 sudo[102326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new
Nov 24 09:29:52 compute-0 sudo[102326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 sudo[102326]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:52 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:52 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:52 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:52 compute-0 sudo[102351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-84a084c3-61a7-5de7-8207-1f88efa59a64/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring.new /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:52 compute-0 sudo[102351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:52 compute-0 sudo[102351]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:53.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v12: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 13 op/s
Nov 24 09:29:53 compute-0 ceph-mon[74331]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:53 compute-0 ceph-mon[74331]: Updating compute-1:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:53 compute-0 ceph-mon[74331]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 24 09:29:53 compute-0 ceph-mon[74331]: osdmap e81: 3 total, 3 up, 3 in
Nov 24 09:29:53 compute-0 ceph-mon[74331]: Updating compute-0:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:53 compute-0 ceph-mon[74331]: 8.1f scrub starts
Nov 24 09:29:53 compute-0 ceph-mon[74331]: 8.1f scrub ok
Nov 24 09:29:53 compute-0 ceph-mon[74331]: 11.f scrub starts
Nov 24 09:29:53 compute-0 ceph-mon[74331]: 11.f scrub ok
Nov 24 09:29:53 compute-0 ceph-mon[74331]: 12.c scrub starts
Nov 24 09:29:53 compute-0 ceph-mon[74331]: 12.c scrub ok
Nov 24 09:29:53 compute-0 ceph-mon[74331]: osdmap e82: 3 total, 3 up, 3 in
Nov 24 09:29:53 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:53.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:53 compute-0 sudo[102502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tulewssojerfseuvkiiwqucjgdfuifyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976593.1237495-56-102004679200010/AnsiballZ_command.py'
Nov 24 09:29:53 compute-0 sudo[102502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:29:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:29:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:29:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:29:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:29:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:53 compute-0 sudo[102505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:53 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.b scrub starts
Nov 24 09:29:53 compute-0 sudo[102505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:53 compute-0 sudo[102505]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:53 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.b scrub ok
Nov 24 09:29:53 compute-0 python3.9[102504]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:29:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 24 09:29:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 24 09:29:53 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 24 09:29:53 compute-0 sudo[102530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:29:53 compute-0 sudo[102530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:54 compute-0 podman[102604]: 2025-11-24 09:29:54.249159193 +0000 UTC m=+0.071245659 container create eeeec91d9f21c5d2de3136c1d535949ea87dd41b4f7b6721b53bd3b2e2e70d3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:29:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:54 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:54 compute-0 podman[102604]: 2025-11-24 09:29:54.200010533 +0000 UTC m=+0.022097019 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:29:54 compute-0 ceph-mon[74331]: Updating compute-2:/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/config/ceph.client.admin.keyring
Nov 24 09:29:54 compute-0 ceph-mon[74331]: pgmap v12: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 13 op/s
Nov 24 09:29:54 compute-0 ceph-mon[74331]: 10.4 deep-scrub starts
Nov 24 09:29:54 compute-0 ceph-mon[74331]: 10.4 deep-scrub ok
Nov 24 09:29:54 compute-0 ceph-mon[74331]: 8.18 scrub starts
Nov 24 09:29:54 compute-0 ceph-mon[74331]: 8.18 scrub ok
Nov 24 09:29:54 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:54 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:54 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:54 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:54 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:29:54 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:29:54 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:29:54 compute-0 ceph-mon[74331]: 12.b scrub starts
Nov 24 09:29:54 compute-0 ceph-mon[74331]: 12.b scrub ok
Nov 24 09:29:54 compute-0 ceph-mon[74331]: osdmap e83: 3 total, 3 up, 3 in
Nov 24 09:29:54 compute-0 systemd[1]: Started libpod-conmon-eeeec91d9f21c5d2de3136c1d535949ea87dd41b4f7b6721b53bd3b2e2e70d3d.scope.
Nov 24 09:29:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:54 compute-0 podman[102604]: 2025-11-24 09:29:54.38554377 +0000 UTC m=+0.207630256 container init eeeec91d9f21c5d2de3136c1d535949ea87dd41b4f7b6721b53bd3b2e2e70d3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:29:54 compute-0 podman[102604]: 2025-11-24 09:29:54.394209718 +0000 UTC m=+0.216296194 container start eeeec91d9f21c5d2de3136c1d535949ea87dd41b4f7b6721b53bd3b2e2e70d3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 24 09:29:54 compute-0 stoic_jang[102622]: 167 167
Nov 24 09:29:54 compute-0 systemd[1]: libpod-eeeec91d9f21c5d2de3136c1d535949ea87dd41b4f7b6721b53bd3b2e2e70d3d.scope: Deactivated successfully.
Nov 24 09:29:54 compute-0 podman[102604]: 2025-11-24 09:29:54.402081353 +0000 UTC m=+0.224167869 container attach eeeec91d9f21c5d2de3136c1d535949ea87dd41b4f7b6721b53bd3b2e2e70d3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jang, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:29:54 compute-0 podman[102604]: 2025-11-24 09:29:54.403262776 +0000 UTC m=+0.225349252 container died eeeec91d9f21c5d2de3136c1d535949ea87dd41b4f7b6721b53bd3b2e2e70d3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 09:29:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8ca631034395505e3a0810d8d7e0926c6c40986ac3db93aab73c0d21b420743-merged.mount: Deactivated successfully.
Nov 24 09:29:54 compute-0 podman[102604]: 2025-11-24 09:29:54.505817613 +0000 UTC m=+0.327904069 container remove eeeec91d9f21c5d2de3136c1d535949ea87dd41b4f7b6721b53bd3b2e2e70d3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_jang, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:29:54 compute-0 systemd[1]: libpod-conmon-eeeec91d9f21c5d2de3136c1d535949ea87dd41b4f7b6721b53bd3b2e2e70d3d.scope: Deactivated successfully.
Nov 24 09:29:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:54 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:54 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.e scrub starts
Nov 24 09:29:54 compute-0 podman[102646]: 2025-11-24 09:29:54.73574692 +0000 UTC m=+0.069193132 container create 485cedd0f31a014defd347a6ca4b51a2e6d9778814c7d6b3e189cc8ca28ef8ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bardeen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:29:54 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.e scrub ok
Nov 24 09:29:54 compute-0 systemd[1]: Started libpod-conmon-485cedd0f31a014defd347a6ca4b51a2e6d9778814c7d6b3e189cc8ca28ef8ae.scope.
Nov 24 09:29:54 compute-0 podman[102646]: 2025-11-24 09:29:54.703409862 +0000 UTC m=+0.036856114 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:29:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b4ebb6b66908e27483f9ce497924a82213e2ba189c588aee208366c55cfcb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b4ebb6b66908e27483f9ce497924a82213e2ba189c588aee208366c55cfcb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b4ebb6b66908e27483f9ce497924a82213e2ba189c588aee208366c55cfcb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b4ebb6b66908e27483f9ce497924a82213e2ba189c588aee208366c55cfcb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b4ebb6b66908e27483f9ce497924a82213e2ba189c588aee208366c55cfcb2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:54 compute-0 podman[102646]: 2025-11-24 09:29:54.847912491 +0000 UTC m=+0.181358703 container init 485cedd0f31a014defd347a6ca4b51a2e6d9778814c7d6b3e189cc8ca28ef8ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:29:54 compute-0 podman[102646]: 2025-11-24 09:29:54.854590485 +0000 UTC m=+0.188036697 container start 485cedd0f31a014defd347a6ca4b51a2e6d9778814c7d6b3e189cc8ca28ef8ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bardeen, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 24 09:29:54 compute-0 podman[102646]: 2025-11-24 09:29:54.859272983 +0000 UTC m=+0.192719215 container attach 485cedd0f31a014defd347a6ca4b51a2e6d9778814c7d6b3e189cc8ca28ef8ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bardeen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:29:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:54 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:55 compute-0 modest_bardeen[102662]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:29:55 compute-0 modest_bardeen[102662]: --> All data devices are unavailable
Nov 24 09:29:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:55.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:55 compute-0 systemd[1]: libpod-485cedd0f31a014defd347a6ca4b51a2e6d9778814c7d6b3e189cc8ca28ef8ae.scope: Deactivated successfully.
Nov 24 09:29:55 compute-0 podman[102646]: 2025-11-24 09:29:55.208707381 +0000 UTC m=+0.542153593 container died 485cedd0f31a014defd347a6ca4b51a2e6d9778814c7d6b3e189cc8ca28ef8ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bardeen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-62b4ebb6b66908e27483f9ce497924a82213e2ba189c588aee208366c55cfcb2-merged.mount: Deactivated successfully.
Nov 24 09:29:55 compute-0 podman[102646]: 2025-11-24 09:29:55.258265743 +0000 UTC m=+0.591711955 container remove 485cedd0f31a014defd347a6ca4b51a2e6d9778814c7d6b3e189cc8ca28ef8ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bardeen, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:29:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v14: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 13 op/s
Nov 24 09:29:55 compute-0 systemd[1]: libpod-conmon-485cedd0f31a014defd347a6ca4b51a2e6d9778814c7d6b3e189cc8ca28ef8ae.scope: Deactivated successfully.
Nov 24 09:29:55 compute-0 sudo[102530]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:55 compute-0 sudo[102689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:55 compute-0 sudo[102689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:55 compute-0 ceph-mon[74331]: 12.1d scrub starts
Nov 24 09:29:55 compute-0 ceph-mon[74331]: 12.1d scrub ok
Nov 24 09:29:55 compute-0 ceph-mon[74331]: 11.7 scrub starts
Nov 24 09:29:55 compute-0 ceph-mon[74331]: 11.7 scrub ok
Nov 24 09:29:55 compute-0 ceph-mon[74331]: 12.e scrub starts
Nov 24 09:29:55 compute-0 ceph-mon[74331]: 12.e scrub ok
Nov 24 09:29:55 compute-0 sudo[102689]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:55 compute-0 sudo[102715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:29:55 compute-0 sudo[102715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:29:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:55.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:29:55 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 24 09:29:55 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 24 09:29:55 compute-0 podman[102784]: 2025-11-24 09:29:55.856703822 +0000 UTC m=+0.047235639 container create 235132357b96a50fd721c409d7481525e578851c670fb1e67d0d98a7bdff9976 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hellman, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:29:55 compute-0 systemd[1]: Started libpod-conmon-235132357b96a50fd721c409d7481525e578851c670fb1e67d0d98a7bdff9976.scope.
Nov 24 09:29:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:55 compute-0 podman[102784]: 2025-11-24 09:29:55.833780893 +0000 UTC m=+0.024312750 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:29:55 compute-0 podman[102784]: 2025-11-24 09:29:55.9465477 +0000 UTC m=+0.137079537 container init 235132357b96a50fd721c409d7481525e578851c670fb1e67d0d98a7bdff9976 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:29:55 compute-0 podman[102784]: 2025-11-24 09:29:55.956593807 +0000 UTC m=+0.147125624 container start 235132357b96a50fd721c409d7481525e578851c670fb1e67d0d98a7bdff9976 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 24 09:29:55 compute-0 xenodochial_hellman[102800]: 167 167
Nov 24 09:29:55 compute-0 podman[102784]: 2025-11-24 09:29:55.963078004 +0000 UTC m=+0.153609831 container attach 235132357b96a50fd721c409d7481525e578851c670fb1e67d0d98a7bdff9976 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:29:55 compute-0 systemd[1]: libpod-235132357b96a50fd721c409d7481525e578851c670fb1e67d0d98a7bdff9976.scope: Deactivated successfully.
Nov 24 09:29:55 compute-0 conmon[102800]: conmon 235132357b96a50fd721 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-235132357b96a50fd721c409d7481525e578851c670fb1e67d0d98a7bdff9976.scope/container/memory.events
Nov 24 09:29:55 compute-0 podman[102784]: 2025-11-24 09:29:55.967040023 +0000 UTC m=+0.157571860 container died 235132357b96a50fd721c409d7481525e578851c670fb1e67d0d98a7bdff9976 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:29:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a4d45295aebb02cfa4b158e3f79fd0e267fdd308480cb67f89857172fbeccfb-merged.mount: Deactivated successfully.
Nov 24 09:29:56 compute-0 podman[102784]: 2025-11-24 09:29:56.088095528 +0000 UTC m=+0.278627335 container remove 235132357b96a50fd721c409d7481525e578851c670fb1e67d0d98a7bdff9976 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 24 09:29:56 compute-0 systemd[1]: libpod-conmon-235132357b96a50fd721c409d7481525e578851c670fb1e67d0d98a7bdff9976.scope: Deactivated successfully.
Nov 24 09:29:56 compute-0 podman[102822]: 2025-11-24 09:29:56.280472793 +0000 UTC m=+0.043156866 container create 6cf2f29b31a65dd56e9e6b36185b4c579ad9703ad3b4e069bedbd45f818bf8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:29:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:56 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:56 compute-0 systemd[1]: Started libpod-conmon-6cf2f29b31a65dd56e9e6b36185b4c579ad9703ad3b4e069bedbd45f818bf8d6.scope.
Nov 24 09:29:56 compute-0 podman[102822]: 2025-11-24 09:29:56.26104948 +0000 UTC m=+0.023733573 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:29:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d356fb88036049cd6d6f84733539c4bce07fb8402065da2e83d18ec1274959/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d356fb88036049cd6d6f84733539c4bce07fb8402065da2e83d18ec1274959/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d356fb88036049cd6d6f84733539c4bce07fb8402065da2e83d18ec1274959/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3d356fb88036049cd6d6f84733539c4bce07fb8402065da2e83d18ec1274959/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:56 compute-0 podman[102822]: 2025-11-24 09:29:56.384002628 +0000 UTC m=+0.146686741 container init 6cf2f29b31a65dd56e9e6b36185b4c579ad9703ad3b4e069bedbd45f818bf8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:29:56 compute-0 podman[102822]: 2025-11-24 09:29:56.391168424 +0000 UTC m=+0.153852497 container start 6cf2f29b31a65dd56e9e6b36185b4c579ad9703ad3b4e069bedbd45f818bf8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:29:56 compute-0 ceph-mon[74331]: pgmap v14: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 13 op/s
Nov 24 09:29:56 compute-0 ceph-mon[74331]: 8.5 deep-scrub starts
Nov 24 09:29:56 compute-0 ceph-mon[74331]: 8.5 deep-scrub ok
Nov 24 09:29:56 compute-0 ceph-mon[74331]: 11.1c scrub starts
Nov 24 09:29:56 compute-0 ceph-mon[74331]: 11.1c scrub ok
Nov 24 09:29:56 compute-0 ceph-mon[74331]: 10.8 scrub starts
Nov 24 09:29:56 compute-0 ceph-mon[74331]: 10.8 scrub ok
Nov 24 09:29:56 compute-0 podman[102822]: 2025-11-24 09:29:56.394580338 +0000 UTC m=+0.157264451 container attach 6cf2f29b31a65dd56e9e6b36185b4c579ad9703ad3b4e069bedbd45f818bf8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:29:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:56 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]: {
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:     "0": [
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:         {
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             "devices": [
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "/dev/loop3"
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             ],
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             "lv_name": "ceph_lv0",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             "lv_size": "21470642176",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             "name": "ceph_lv0",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             "tags": {
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.cluster_name": "ceph",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.crush_device_class": "",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.encrypted": "0",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.osd_id": "0",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.type": "block",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.vdo": "0",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:                 "ceph.with_tpm": "0"
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             },
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             "type": "block",
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:             "vg_name": "ceph_vg0"
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:         }
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]:     ]
Nov 24 09:29:56 compute-0 gracious_lovelace[102838]: }
Nov 24 09:29:56 compute-0 systemd[1]: libpod-6cf2f29b31a65dd56e9e6b36185b4c579ad9703ad3b4e069bedbd45f818bf8d6.scope: Deactivated successfully.
Nov 24 09:29:56 compute-0 podman[102822]: 2025-11-24 09:29:56.690004493 +0000 UTC m=+0.452688566 container died 6cf2f29b31a65dd56e9e6b36185b4c579ad9703ad3b4e069bedbd45f818bf8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:29:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3d356fb88036049cd6d6f84733539c4bce07fb8402065da2e83d18ec1274959-merged.mount: Deactivated successfully.
Nov 24 09:29:56 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Nov 24 09:29:56 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Nov 24 09:29:56 compute-0 podman[102822]: 2025-11-24 09:29:56.75650663 +0000 UTC m=+0.519190713 container remove 6cf2f29b31a65dd56e9e6b36185b4c579ad9703ad3b4e069bedbd45f818bf8d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 09:29:56 compute-0 systemd[1]: libpod-conmon-6cf2f29b31a65dd56e9e6b36185b4c579ad9703ad3b4e069bedbd45f818bf8d6.scope: Deactivated successfully.
Nov 24 09:29:56 compute-0 sudo[102715]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:56 compute-0 sudo[102862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:56 compute-0 sudo[102862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:56 compute-0 sudo[102862]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:56 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:56 compute-0 sudo[102887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:29:56 compute-0 sudo[102887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:57.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v15: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 24 09:29:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Nov 24 09:29:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 09:29:57 compute-0 podman[102957]: 2025-11-24 09:29:57.364359438 +0000 UTC m=+0.050166380 container create a60c6710452fd0c7d8b185c12d6fd2d1123783e29ae4d2143832297e472eba63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_booth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:29:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 24 09:29:57 compute-0 ceph-mon[74331]: 12.1e deep-scrub starts
Nov 24 09:29:57 compute-0 ceph-mon[74331]: 12.1e deep-scrub ok
Nov 24 09:29:57 compute-0 ceph-mon[74331]: 11.1e scrub starts
Nov 24 09:29:57 compute-0 ceph-mon[74331]: 11.1e scrub ok
Nov 24 09:29:57 compute-0 ceph-mon[74331]: 12.19 scrub starts
Nov 24 09:29:57 compute-0 ceph-mon[74331]: 12.19 scrub ok
Nov 24 09:29:57 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 09:29:57 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 09:29:57 compute-0 systemd[1]: Started libpod-conmon-a60c6710452fd0c7d8b185c12d6fd2d1123783e29ae4d2143832297e472eba63.scope.
Nov 24 09:29:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 24 09:29:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 24 09:29:57 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 84 pg[9.9( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=84 pruub=12.695670128s) [2] r=-1 lpr=84 pi=[54,84)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 216.069274902s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:57 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 84 pg[9.9( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=84 pruub=12.695622444s) [2] r=-1 lpr=84 pi=[54,84)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.069274902s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:57 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 24 09:29:57 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 84 pg[9.19( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=84 pruub=12.699002266s) [2] r=-1 lpr=84 pi=[54,84)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 216.073471069s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:57 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 84 pg[9.19( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=84 pruub=12.698986053s) [2] r=-1 lpr=84 pi=[54,84)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.073471069s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:57 compute-0 podman[102957]: 2025-11-24 09:29:57.344739309 +0000 UTC m=+0.030546271 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:29:57 compute-0 podman[102957]: 2025-11-24 09:29:57.457882997 +0000 UTC m=+0.143689939 container init a60c6710452fd0c7d8b185c12d6fd2d1123783e29ae4d2143832297e472eba63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:29:57 compute-0 podman[102957]: 2025-11-24 09:29:57.465919908 +0000 UTC m=+0.151726860 container start a60c6710452fd0c7d8b185c12d6fd2d1123783e29ae4d2143832297e472eba63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_booth, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:29:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:57.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:57 compute-0 confident_booth[102974]: 167 167
Nov 24 09:29:57 compute-0 podman[102957]: 2025-11-24 09:29:57.470480913 +0000 UTC m=+0.156287885 container attach a60c6710452fd0c7d8b185c12d6fd2d1123783e29ae4d2143832297e472eba63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:29:57 compute-0 systemd[1]: libpod-a60c6710452fd0c7d8b185c12d6fd2d1123783e29ae4d2143832297e472eba63.scope: Deactivated successfully.
Nov 24 09:29:57 compute-0 podman[102957]: 2025-11-24 09:29:57.471527622 +0000 UTC m=+0.157334564 container died a60c6710452fd0c7d8b185c12d6fd2d1123783e29ae4d2143832297e472eba63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 09:29:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa2987df6656a31f1ad48ba4f91c2b648da95b1b3d166080b7a6850f603fa44e-merged.mount: Deactivated successfully.
Nov 24 09:29:57 compute-0 podman[102957]: 2025-11-24 09:29:57.515228872 +0000 UTC m=+0.201035814 container remove a60c6710452fd0c7d8b185c12d6fd2d1123783e29ae4d2143832297e472eba63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_booth, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:29:57 compute-0 systemd[1]: libpod-conmon-a60c6710452fd0c7d8b185c12d6fd2d1123783e29ae4d2143832297e472eba63.scope: Deactivated successfully.
Nov 24 09:29:57 compute-0 podman[102999]: 2025-11-24 09:29:57.706320701 +0000 UTC m=+0.073102729 container create dab88a921e28de75d84ca3273cf3c86b1cd0c23f183a60eab2a8b9307459d8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_gates, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:29:57 compute-0 systemd[1]: Started libpod-conmon-dab88a921e28de75d84ca3273cf3c86b1cd0c23f183a60eab2a8b9307459d8d9.scope.
Nov 24 09:29:57 compute-0 podman[102999]: 2025-11-24 09:29:57.654815746 +0000 UTC m=+0.021597794 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:29:57 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.18 deep-scrub starts
Nov 24 09:29:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:57 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.18 deep-scrub ok
Nov 24 09:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be69fdb7cc4f4e870678f1a31da21d1a5259111e1525ae0d7b24cbec2ab47f73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be69fdb7cc4f4e870678f1a31da21d1a5259111e1525ae0d7b24cbec2ab47f73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be69fdb7cc4f4e870678f1a31da21d1a5259111e1525ae0d7b24cbec2ab47f73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be69fdb7cc4f4e870678f1a31da21d1a5259111e1525ae0d7b24cbec2ab47f73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:29:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:29:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 24 09:29:57 compute-0 podman[102999]: 2025-11-24 09:29:57.790624087 +0000 UTC m=+0.157406135 container init dab88a921e28de75d84ca3273cf3c86b1cd0c23f183a60eab2a8b9307459d8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_gates, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:29:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 24 09:29:57 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 24 09:29:57 compute-0 podman[102999]: 2025-11-24 09:29:57.805901437 +0000 UTC m=+0.172683505 container start dab88a921e28de75d84ca3273cf3c86b1cd0c23f183a60eab2a8b9307459d8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 09:29:57 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 85 pg[9.9( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=85) [2]/[0] r=0 lpr=85 pi=[54,85)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:57 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 85 pg[9.9( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=85) [2]/[0] r=0 lpr=85 pi=[54,85)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:57 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 85 pg[9.19( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=85) [2]/[0] r=0 lpr=85 pi=[54,85)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:57 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 85 pg[9.19( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=85) [2]/[0] r=0 lpr=85 pi=[54,85)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:29:57 compute-0 podman[102999]: 2025-11-24 09:29:57.813677921 +0000 UTC m=+0.180459949 container attach dab88a921e28de75d84ca3273cf3c86b1cd0c23f183a60eab2a8b9307459d8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_gates, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 09:29:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:58 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:58 compute-0 ceph-mon[74331]: pgmap v15: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 24 09:29:58 compute-0 ceph-mon[74331]: 8.1c scrub starts
Nov 24 09:29:58 compute-0 ceph-mon[74331]: 8.1c scrub ok
Nov 24 09:29:58 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 24 09:29:58 compute-0 ceph-mon[74331]: osdmap e84: 3 total, 3 up, 3 in
Nov 24 09:29:58 compute-0 ceph-mon[74331]: 8.1b deep-scrub starts
Nov 24 09:29:58 compute-0 ceph-mon[74331]: 8.1b deep-scrub ok
Nov 24 09:29:58 compute-0 ceph-mon[74331]: 10.18 deep-scrub starts
Nov 24 09:29:58 compute-0 ceph-mon[74331]: 10.18 deep-scrub ok
Nov 24 09:29:58 compute-0 ceph-mon[74331]: osdmap e85: 3 total, 3 up, 3 in
Nov 24 09:29:58 compute-0 lvm[103095]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:29:58 compute-0 lvm[103095]: VG ceph_vg0 finished
Nov 24 09:29:58 compute-0 suspicious_gates[103015]: {}
Nov 24 09:29:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:58 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:58 compute-0 systemd[1]: libpod-dab88a921e28de75d84ca3273cf3c86b1cd0c23f183a60eab2a8b9307459d8d9.scope: Deactivated successfully.
Nov 24 09:29:58 compute-0 systemd[1]: libpod-dab88a921e28de75d84ca3273cf3c86b1cd0c23f183a60eab2a8b9307459d8d9.scope: Consumed 1.195s CPU time.
Nov 24 09:29:58 compute-0 podman[102999]: 2025-11-24 09:29:58.527535631 +0000 UTC m=+0.894317669 container died dab88a921e28de75d84ca3273cf3c86b1cd0c23f183a60eab2a8b9307459d8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:29:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-be69fdb7cc4f4e870678f1a31da21d1a5259111e1525ae0d7b24cbec2ab47f73-merged.mount: Deactivated successfully.
Nov 24 09:29:58 compute-0 podman[102999]: 2025-11-24 09:29:58.586588293 +0000 UTC m=+0.953370321 container remove dab88a921e28de75d84ca3273cf3c86b1cd0c23f183a60eab2a8b9307459d8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_gates, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:29:58 compute-0 systemd[1]: libpod-conmon-dab88a921e28de75d84ca3273cf3c86b1cd0c23f183a60eab2a8b9307459d8d9.scope: Deactivated successfully.
Nov 24 09:29:58 compute-0 sudo[102887]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:58 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:58 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 24 09:29:58 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:58 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Nov 24 09:29:58 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Nov 24 09:29:58 compute-0 sudo[103111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:29:58 compute-0 sudo[103111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:58 compute-0 sudo[103110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:29:58 compute-0 sudo[103111]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:58 compute-0 sudo[103110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 24 09:29:58 compute-0 sudo[103110]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 24 09:29:58 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 24 09:29:58 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 86 pg[9.19( v 45'1130 (0'0,45'1130] local-lis/les=85/86 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[54,85)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:58 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 86 pg[9.9( v 45'1130 (0'0,45'1130] local-lis/les=85/86 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[54,85)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:29:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:29:58 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:29:58 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Nov 24 09:29:58 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Nov 24 09:29:58 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 09:29:58 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 09:29:59 compute-0 sudo[103160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:59 compute-0 sudo[103160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:59 compute-0 sudo[103160]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:59 compute-0 sudo[103185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:59 compute-0 sudo[103185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:29:59.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v19: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 40 B/s, 2 objects/s recovering
Nov 24 09:29:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Nov 24 09:29:59 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 09:29:59 compute-0 ceph-mon[74331]: 12.4 scrub starts
Nov 24 09:29:59 compute-0 ceph-mon[74331]: 12.4 scrub ok
Nov 24 09:29:59 compute-0 ceph-mon[74331]: 11.1d scrub starts
Nov 24 09:29:59 compute-0 ceph-mon[74331]: 11.1d scrub ok
Nov 24 09:29:59 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:59 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:59 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:59 compute-0 ceph-mon[74331]: 12.1c scrub starts
Nov 24 09:29:59 compute-0 ceph-mon[74331]: 12.1c scrub ok
Nov 24 09:29:59 compute-0 ceph-mon[74331]: osdmap e86: 3 total, 3 up, 3 in
Nov 24 09:29:59 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:29:59 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:29:59 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:29:59 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 09:29:59 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 09:29:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:29:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:29:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:29:59.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:29:59 compute-0 podman[103227]: 2025-11-24 09:29:59.472857889 +0000 UTC m=+0.040495934 container create e11580f92c3b6f5e58e42777447b1d68e631dc4c7187ba5b6f25391ce0f19357 (image=quay.io/ceph/ceph:v19, name=charming_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:29:59 compute-0 systemd[1]: Started libpod-conmon-e11580f92c3b6f5e58e42777447b1d68e631dc4c7187ba5b6f25391ce0f19357.scope.
Nov 24 09:29:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:29:59 compute-0 podman[103227]: 2025-11-24 09:29:59.457520047 +0000 UTC m=+0.025158112 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:29:59 compute-0 podman[103227]: 2025-11-24 09:29:59.557397991 +0000 UTC m=+0.125036056 container init e11580f92c3b6f5e58e42777447b1d68e631dc4c7187ba5b6f25391ce0f19357 (image=quay.io/ceph/ceph:v19, name=charming_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 24 09:29:59 compute-0 podman[103227]: 2025-11-24 09:29:59.568011513 +0000 UTC m=+0.135649588 container start e11580f92c3b6f5e58e42777447b1d68e631dc4c7187ba5b6f25391ce0f19357 (image=quay.io/ceph/ceph:v19, name=charming_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 09:29:59 compute-0 podman[103227]: 2025-11-24 09:29:59.572694861 +0000 UTC m=+0.140332926 container attach e11580f92c3b6f5e58e42777447b1d68e631dc4c7187ba5b6f25391ce0f19357 (image=quay.io/ceph/ceph:v19, name=charming_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:29:59 compute-0 charming_williamson[103245]: 167 167
Nov 24 09:29:59 compute-0 systemd[1]: libpod-e11580f92c3b6f5e58e42777447b1d68e631dc4c7187ba5b6f25391ce0f19357.scope: Deactivated successfully.
Nov 24 09:29:59 compute-0 podman[103227]: 2025-11-24 09:29:59.577446372 +0000 UTC m=+0.145084427 container died e11580f92c3b6f5e58e42777447b1d68e631dc4c7187ba5b6f25391ce0f19357 (image=quay.io/ceph/ceph:v19, name=charming_williamson, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:29:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-42e0859ae5599dabd5043f3be6a1ddb11bbbc650822c0edcc479b17770b05c61-merged.mount: Deactivated successfully.
Nov 24 09:29:59 compute-0 podman[103227]: 2025-11-24 09:29:59.628772301 +0000 UTC m=+0.196410346 container remove e11580f92c3b6f5e58e42777447b1d68e631dc4c7187ba5b6f25391ce0f19357 (image=quay.io/ceph/ceph:v19, name=charming_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 09:29:59 compute-0 systemd[1]: libpod-conmon-e11580f92c3b6f5e58e42777447b1d68e631dc4c7187ba5b6f25391ce0f19357.scope: Deactivated successfully.
Nov 24 09:29:59 compute-0 sudo[103185]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:29:59 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:29:59 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:29:59 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 24 09:29:59 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.mauvni (monmap changed)...
Nov 24 09:29:59 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.mauvni (monmap changed)...
Nov 24 09:29:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 24 09:29:59 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:29:59 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.mauvni on compute-0
Nov 24 09:29:59 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.mauvni on compute-0
Nov 24 09:29:59 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 24 09:29:59 compute-0 sudo[103263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:29:59 compute-0 sudo[103263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:29:59 compute-0 sudo[103263]: pam_unix(sudo:session): session closed for user root
Nov 24 09:29:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 24 09:29:59 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 24 09:29:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 24 09:29:59 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 24 09:29:59 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 87 pg[9.9( v 45'1130 (0'0,45'1130] local-lis/les=85/86 n=6 ec=54/39 lis/c=85/54 les/c/f=86/55/0 sis=87 pruub=14.982713699s) [2] async=[2] r=-1 lpr=87 pi=[54,87)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 220.772460938s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:59 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 87 pg[9.9( v 45'1130 (0'0,45'1130] local-lis/les=85/86 n=6 ec=54/39 lis/c=85/54 les/c/f=86/55/0 sis=87 pruub=14.982624054s) [2] r=-1 lpr=87 pi=[54,87)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.772460938s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:59 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 87 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=10.279687881s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 216.069839478s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:59 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 87 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=10.279659271s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.069839478s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:59 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 87 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=10.282909393s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 216.073394775s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:59 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 87 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=87 pruub=10.282877922s) [1] r=-1 lpr=87 pi=[54,87)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.073394775s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:59 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 87 pg[9.19( v 45'1130 (0'0,45'1130] local-lis/les=85/86 n=5 ec=54/39 lis/c=85/54 les/c/f=86/55/0 sis=87 pruub=14.981780052s) [2] async=[2] r=-1 lpr=87 pi=[54,87)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 220.772430420s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:29:59 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 87 pg[9.19( v 45'1130 (0'0,45'1130] local-lis/les=85/86 n=5 ec=54/39 lis/c=85/54 les/c/f=86/55/0 sis=87 pruub=14.981726646s) [2] r=-1 lpr=87 pi=[54,87)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.772430420s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:29:59 compute-0 sudo[103288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:29:59 compute-0 sudo[103288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:00 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 24 09:30:00 compute-0 podman[103336]: 2025-11-24 09:30:00.184010144 +0000 UTC m=+0.046996422 container create 4d96ac62c65fb41f26ce8a696ff11c0c6f86796ea94ce923dea72d522fef796e (image=quay.io/ceph/ceph:v19, name=agitated_maxwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:30:00 compute-0 systemd[1]: Started libpod-conmon-4d96ac62c65fb41f26ce8a696ff11c0c6f86796ea94ce923dea72d522fef796e.scope.
Nov 24 09:30:00 compute-0 podman[103336]: 2025-11-24 09:30:00.160737444 +0000 UTC m=+0.023723742 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 24 09:30:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:00 compute-0 podman[103336]: 2025-11-24 09:30:00.27015402 +0000 UTC m=+0.133140688 container init 4d96ac62c65fb41f26ce8a696ff11c0c6f86796ea94ce923dea72d522fef796e (image=quay.io/ceph/ceph:v19, name=agitated_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:30:00 compute-0 podman[103336]: 2025-11-24 09:30:00.276509425 +0000 UTC m=+0.139495703 container start 4d96ac62c65fb41f26ce8a696ff11c0c6f86796ea94ce923dea72d522fef796e (image=quay.io/ceph/ceph:v19, name=agitated_maxwell, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:30:00 compute-0 podman[103336]: 2025-11-24 09:30:00.279767634 +0000 UTC m=+0.142753912 container attach 4d96ac62c65fb41f26ce8a696ff11c0c6f86796ea94ce923dea72d522fef796e (image=quay.io/ceph/ceph:v19, name=agitated_maxwell, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:30:00 compute-0 agitated_maxwell[103352]: 167 167
Nov 24 09:30:00 compute-0 systemd[1]: libpod-4d96ac62c65fb41f26ce8a696ff11c0c6f86796ea94ce923dea72d522fef796e.scope: Deactivated successfully.
Nov 24 09:30:00 compute-0 podman[103336]: 2025-11-24 09:30:00.281705438 +0000 UTC m=+0.144691726 container died 4d96ac62c65fb41f26ce8a696ff11c0c6f86796ea94ce923dea72d522fef796e (image=quay.io/ceph/ceph:v19, name=agitated_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 09:30:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:00 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d71ebe22cf1895025ad1f56f82a7bcafa393ea31144ae8d80e02fef05b35d691-merged.mount: Deactivated successfully.
Nov 24 09:30:00 compute-0 podman[103336]: 2025-11-24 09:30:00.331542536 +0000 UTC m=+0.194528814 container remove 4d96ac62c65fb41f26ce8a696ff11c0c6f86796ea94ce923dea72d522fef796e (image=quay.io/ceph/ceph:v19, name=agitated_maxwell, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:30:00 compute-0 systemd[1]: libpod-conmon-4d96ac62c65fb41f26ce8a696ff11c0c6f86796ea94ce923dea72d522fef796e.scope: Deactivated successfully.
Nov 24 09:30:00 compute-0 sudo[103288]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:00 compute-0 ceph-mon[74331]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 24 09:30:00 compute-0 ceph-mon[74331]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 09:30:00 compute-0 ceph-mon[74331]: 11.19 scrub starts
Nov 24 09:30:00 compute-0 ceph-mon[74331]: 11.19 scrub ok
Nov 24 09:30:00 compute-0 ceph-mon[74331]: pgmap v19: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 40 B/s, 2 objects/s recovering
Nov 24 09:30:00 compute-0 ceph-mon[74331]: 8.4 scrub starts
Nov 24 09:30:00 compute-0 ceph-mon[74331]: 8.4 scrub ok
Nov 24 09:30:00 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:00 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:00 compute-0 ceph-mon[74331]: 10.19 scrub starts
Nov 24 09:30:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:30:00 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mauvni", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:30:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:30:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:00 compute-0 ceph-mon[74331]: 10.19 scrub ok
Nov 24 09:30:00 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 24 09:30:00 compute-0 ceph-mon[74331]: osdmap e87: 3 total, 3 up, 3 in
Nov 24 09:30:00 compute-0 ceph-mon[74331]: overall HEALTH_OK
Nov 24 09:30:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:30:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:00 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:00 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Nov 24 09:30:00 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Nov 24 09:30:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 24 09:30:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:30:00 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Nov 24 09:30:00 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Nov 24 09:30:00 compute-0 sudo[103370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:00 compute-0 sudo[103370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:00 compute-0 sudo[103370]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:00 compute-0 sudo[103395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:30:00 compute-0 sudo[103395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:00 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Nov 24 09:30:00 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Nov 24 09:30:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 24 09:30:00 compute-0 sudo[102502]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 24 09:30:00 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 24 09:30:00 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 88 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:00 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 88 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:00 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 88 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:00 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 88 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] r=0 lpr=88 pi=[54,88)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:00 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:00 compute-0 podman[103437]: 2025-11-24 09:30:00.932795464 +0000 UTC m=+0.038328475 container create 27257fe31454a85dc74910371581b54a9d34dba8fe78f57baec819b170087634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_easley, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:30:00 compute-0 systemd[1]: Started libpod-conmon-27257fe31454a85dc74910371581b54a9d34dba8fe78f57baec819b170087634.scope.
Nov 24 09:30:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:30:00] "GET /metrics HTTP/1.1" 200 48286 "" "Prometheus/2.51.0"
Nov 24 09:30:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:30:00] "GET /metrics HTTP/1.1" 200 48286 "" "Prometheus/2.51.0"
Nov 24 09:30:01 compute-0 podman[103437]: 2025-11-24 09:30:01.005677346 +0000 UTC m=+0.111210387 container init 27257fe31454a85dc74910371581b54a9d34dba8fe78f57baec819b170087634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:30:01 compute-0 podman[103437]: 2025-11-24 09:30:00.91738632 +0000 UTC m=+0.022919361 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:30:01 compute-0 podman[103437]: 2025-11-24 09:30:01.013993974 +0000 UTC m=+0.119526995 container start 27257fe31454a85dc74910371581b54a9d34dba8fe78f57baec819b170087634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 24 09:30:01 compute-0 podman[103437]: 2025-11-24 09:30:01.016895414 +0000 UTC m=+0.122428465 container attach 27257fe31454a85dc74910371581b54a9d34dba8fe78f57baec819b170087634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_easley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 24 09:30:01 compute-0 quirky_easley[103477]: 167 167
Nov 24 09:30:01 compute-0 systemd[1]: libpod-27257fe31454a85dc74910371581b54a9d34dba8fe78f57baec819b170087634.scope: Deactivated successfully.
Nov 24 09:30:01 compute-0 podman[103437]: 2025-11-24 09:30:01.019609898 +0000 UTC m=+0.125142919 container died 27257fe31454a85dc74910371581b54a9d34dba8fe78f57baec819b170087634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_easley, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad6731ee7f27ddcd5f75ce4805bb91bfc08c7d3533d9700b31a0e5c9aacd822b-merged.mount: Deactivated successfully.
Nov 24 09:30:01 compute-0 podman[103437]: 2025-11-24 09:30:01.057966982 +0000 UTC m=+0.163500003 container remove 27257fe31454a85dc74910371581b54a9d34dba8fe78f57baec819b170087634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_easley, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:30:01 compute-0 systemd[1]: libpod-conmon-27257fe31454a85dc74910371581b54a9d34dba8fe78f57baec819b170087634.scope: Deactivated successfully.
Nov 24 09:30:01 compute-0 sudo[103395]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:01 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:01 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:01 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Nov 24 09:30:01 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Nov 24 09:30:01 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Nov 24 09:30:01 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Nov 24 09:30:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:01.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:01 compute-0 sudo[103496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:01 compute-0 sudo[103496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:01 compute-0 sudo[103496]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:01 compute-0 sudo[103521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:30:01 compute-0 sudo[103521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v22: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:01.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:01 compute-0 ceph-mon[74331]: Reconfiguring mgr.compute-0.mauvni (monmap changed)...
Nov 24 09:30:01 compute-0 ceph-mon[74331]: Reconfiguring daemon mgr.compute-0.mauvni on compute-0
Nov 24 09:30:01 compute-0 ceph-mon[74331]: 8.f scrub starts
Nov 24 09:30:01 compute-0 ceph-mon[74331]: 8.f scrub ok
Nov 24 09:30:01 compute-0 ceph-mon[74331]: 11.1b scrub starts
Nov 24 09:30:01 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:01 compute-0 ceph-mon[74331]: 11.1b scrub ok
Nov 24 09:30:01 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:30:01 compute-0 ceph-mon[74331]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 24 09:30:01 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:30:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:01 compute-0 ceph-mon[74331]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 24 09:30:01 compute-0 ceph-mon[74331]: 12.8 scrub starts
Nov 24 09:30:01 compute-0 ceph-mon[74331]: 12.8 scrub ok
Nov 24 09:30:01 compute-0 ceph-mon[74331]: osdmap e88: 3 total, 3 up, 3 in
Nov 24 09:30:01 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:01 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 09:30:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:01 compute-0 ceph-mon[74331]: 8.3 deep-scrub starts
Nov 24 09:30:01 compute-0 ceph-mon[74331]: 8.3 deep-scrub ok
Nov 24 09:30:01 compute-0 podman[103562]: 2025-11-24 09:30:01.582490961 +0000 UTC m=+0.043005293 container create 8df040b1a965a842710f864943c688e002747b822eda5a8ca62b13b67e49c99b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_williamson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:30:01 compute-0 systemd[1]: Started libpod-conmon-8df040b1a965a842710f864943c688e002747b822eda5a8ca62b13b67e49c99b.scope.
Nov 24 09:30:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:01 compute-0 podman[103562]: 2025-11-24 09:30:01.645587784 +0000 UTC m=+0.106102136 container init 8df040b1a965a842710f864943c688e002747b822eda5a8ca62b13b67e49c99b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:30:01 compute-0 podman[103562]: 2025-11-24 09:30:01.65165722 +0000 UTC m=+0.112171552 container start 8df040b1a965a842710f864943c688e002747b822eda5a8ca62b13b67e49c99b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_williamson, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:30:01 compute-0 peaceful_williamson[103578]: 167 167
Nov 24 09:30:01 compute-0 systemd[1]: libpod-8df040b1a965a842710f864943c688e002747b822eda5a8ca62b13b67e49c99b.scope: Deactivated successfully.
Nov 24 09:30:01 compute-0 podman[103562]: 2025-11-24 09:30:01.566692566 +0000 UTC m=+0.027206928 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:30:01 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 24 09:30:01 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 24 09:30:01 compute-0 podman[103562]: 2025-11-24 09:30:01.753828517 +0000 UTC m=+0.214342869 container attach 8df040b1a965a842710f864943c688e002747b822eda5a8ca62b13b67e49c99b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_williamson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:30:01 compute-0 podman[103562]: 2025-11-24 09:30:01.755073251 +0000 UTC m=+0.215587583 container died 8df040b1a965a842710f864943c688e002747b822eda5a8ca62b13b67e49c99b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_williamson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2efffc1e44c9df7db3e9efe3c4b8806405a60588e890231550963f92e8464fb-merged.mount: Deactivated successfully.
Nov 24 09:30:01 compute-0 podman[103562]: 2025-11-24 09:30:01.790893225 +0000 UTC m=+0.251407557 container remove 8df040b1a965a842710f864943c688e002747b822eda5a8ca62b13b67e49c99b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_williamson, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:30:01 compute-0 systemd[1]: libpod-conmon-8df040b1a965a842710f864943c688e002747b822eda5a8ca62b13b67e49c99b.scope: Deactivated successfully.
Nov 24 09:30:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 24 09:30:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 24 09:30:01 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 24 09:30:01 compute-0 sudo[103521]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:01 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:01 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:01 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring rgw.rgw.compute-0.zlrxyg (unknown last config time)...
Nov 24 09:30:01 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring rgw.rgw.compute-0.zlrxyg (unknown last config time)...
Nov 24 09:30:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 24 09:30:01 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:30:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 24 09:30:01 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon rgw.rgw.compute-0.zlrxyg on compute-0
Nov 24 09:30:01 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon rgw.rgw.compute-0.zlrxyg on compute-0
Nov 24 09:30:02 compute-0 sudo[103604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:02 compute-0 sudo[103604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:02 compute-0 sudo[103604]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:02 compute-0 sudo[103629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:30:02 compute-0 sudo[103629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:02 compute-0 sshd-session[101369]: Connection closed by 192.168.122.30 port 58456
Nov 24 09:30:02 compute-0 sshd-session[101323]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:30:02 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 24 09:30:02 compute-0 systemd[1]: session-38.scope: Consumed 8.452s CPU time.
Nov 24 09:30:02 compute-0 systemd-logind[822]: Session 38 logged out. Waiting for processes to exit.
Nov 24 09:30:02 compute-0 systemd-logind[822]: Removed session 38.
Nov 24 09:30:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:02 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003c90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:02 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 89 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=88/89 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:02 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 89 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=88/89 n=6 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=88) [1]/[0] async=[1] r=0 lpr=88 pi=[54,88)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:02 compute-0 podman[103669]: 2025-11-24 09:30:02.453456046 +0000 UTC m=+0.091808003 container create e48ddf29d555c9284d3bddae5b0922733a91e33c835ab84bc8d6102bd7f73285 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_easley, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:30:02 compute-0 podman[103669]: 2025-11-24 09:30:02.384605355 +0000 UTC m=+0.022957342 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:30:02 compute-0 systemd[1]: Started libpod-conmon-e48ddf29d555c9284d3bddae5b0922733a91e33c835ab84bc8d6102bd7f73285.scope.
Nov 24 09:30:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:02 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:02 compute-0 ceph-mon[74331]: Reconfiguring osd.0 (monmap changed)...
Nov 24 09:30:02 compute-0 ceph-mon[74331]: Reconfiguring daemon osd.0 on compute-0
Nov 24 09:30:02 compute-0 ceph-mon[74331]: pgmap v22: 353 pgs: 2 remapped+peering, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:02 compute-0 ceph-mon[74331]: 11.1a scrub starts
Nov 24 09:30:02 compute-0 ceph-mon[74331]: 11.1a scrub ok
Nov 24 09:30:02 compute-0 ceph-mon[74331]: 10.14 scrub starts
Nov 24 09:30:02 compute-0 ceph-mon[74331]: 10.14 scrub ok
Nov 24 09:30:02 compute-0 ceph-mon[74331]: osdmap e89: 3 total, 3 up, 3 in
Nov 24 09:30:02 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:02 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:02 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:30:02 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zlrxyg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 09:30:02 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:02 compute-0 podman[103669]: 2025-11-24 09:30:02.544308432 +0000 UTC m=+0.182660419 container init e48ddf29d555c9284d3bddae5b0922733a91e33c835ab84bc8d6102bd7f73285 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_easley, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:30:02 compute-0 ceph-mon[74331]: 12.7 scrub starts
Nov 24 09:30:02 compute-0 ceph-mon[74331]: 12.7 scrub ok
Nov 24 09:30:02 compute-0 podman[103669]: 2025-11-24 09:30:02.551228572 +0000 UTC m=+0.189580529 container start e48ddf29d555c9284d3bddae5b0922733a91e33c835ab84bc8d6102bd7f73285 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:30:02 compute-0 podman[103669]: 2025-11-24 09:30:02.555658374 +0000 UTC m=+0.194010361 container attach e48ddf29d555c9284d3bddae5b0922733a91e33c835ab84bc8d6102bd7f73285 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 24 09:30:02 compute-0 relaxed_easley[103685]: 167 167
Nov 24 09:30:02 compute-0 systemd[1]: libpod-e48ddf29d555c9284d3bddae5b0922733a91e33c835ab84bc8d6102bd7f73285.scope: Deactivated successfully.
Nov 24 09:30:02 compute-0 conmon[103685]: conmon e48ddf29d555c9284d3b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e48ddf29d555c9284d3bddae5b0922733a91e33c835ab84bc8d6102bd7f73285.scope/container/memory.events
Nov 24 09:30:02 compute-0 podman[103669]: 2025-11-24 09:30:02.560353542 +0000 UTC m=+0.198705499 container died e48ddf29d555c9284d3bddae5b0922733a91e33c835ab84bc8d6102bd7f73285 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_easley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:30:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aa9167eec4c1ece3827346a53a761f0730d6e9d6b34bab7c376d512c9269cb5-merged.mount: Deactivated successfully.
Nov 24 09:30:02 compute-0 podman[103669]: 2025-11-24 09:30:02.643662471 +0000 UTC m=+0.282014428 container remove e48ddf29d555c9284d3bddae5b0922733a91e33c835ab84bc8d6102bd7f73285 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:30:02 compute-0 systemd[1]: libpod-conmon-e48ddf29d555c9284d3bddae5b0922733a91e33c835ab84bc8d6102bd7f73285.scope: Deactivated successfully.
Nov 24 09:30:02 compute-0 sudo[103629]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:02 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.6 deep-scrub starts
Nov 24 09:30:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:02 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.6 deep-scrub ok
Nov 24 09:30:02 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:02 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:02 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Nov 24 09:30:02 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Nov 24 09:30:02 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Nov 24 09:30:02 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Nov 24 09:30:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 24 09:30:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 24 09:30:02 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 24 09:30:02 compute-0 sudo[103701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:02 compute-0 sudo[103701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:02 compute-0 sudo[103701]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:02 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 90 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=88/89 n=6 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=15.461030006s) [1] async=[1] r=-1 lpr=90 pi=[54,90)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 224.285278320s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:02 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 90 pg[9.a( v 45'1130 (0'0,45'1130] local-lis/les=88/89 n=6 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=15.460924149s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.285278320s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:02 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 90 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=88/89 n=5 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=15.457158089s) [1] async=[1] r=-1 lpr=90 pi=[54,90)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 224.281784058s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:02 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 90 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=88/89 n=5 ec=54/39 lis/c=88/54 les/c/f=89/55/0 sis=90 pruub=15.457054138s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.281784058s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:02 compute-0 sudo[103726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:30:02 compute-0 sudo[103726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:02 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:03 compute-0 systemd[1]: Stopping Ceph node-exporter.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:30:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:03.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
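
[Editor's note] The beast lines above, and their repeats roughly every two seconds from 192.168.122.100 and .102 throughout this section, are anonymous `HEAD / HTTP/1.0` health probes against the RGW frontend, answered 200 with near-zero latency; they are load-balancer checks, not client traffic. An equivalent probe as a sketch (host and port are assumptions about this deployment's rgw frontend):

    # Sketch: send the same kind of anonymous HEAD probe the balancer sends.
    # Host/port are assumptions; point them at the real rgw frontend.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)   # a healthy RGW answers 200
    conn.close()
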
Nov 24 09:30:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v25: 353 pgs: 1 active+clean+scrubbing+deep, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 59 B/s, 2 objects/s recovering
Nov 24 09:30:03 compute-0 podman[103799]: 2025-11-24 09:30:03.344327518 +0000 UTC m=+0.047295520 container died 7b41a24888e2dd3dca187bd76560d76829b7d7b7dcf75bceeedb6a669c1298b7 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3d8fbadef1587ff586d2c16083d30393d0c2cfe352d726c2398b19cd8375193-merged.mount: Deactivated successfully.
Nov 24 09:30:03 compute-0 podman[103799]: 2025-11-24 09:30:03.397641203 +0000 UTC m=+0.100609195 container remove 7b41a24888e2dd3dca187bd76560d76829b7d7b7dcf75bceeedb6a669c1298b7 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:03 compute-0 bash[103799]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0
Nov 24 09:30:03 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Nov 24 09:30:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:03.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:03 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@node-exporter.compute-0.service: Failed with result 'exit-code'.
Nov 24 09:30:03 compute-0 systemd[1]: Stopped Ceph node-exporter.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:30:03 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@node-exporter.compute-0.service: Consumed 1.874s CPU time.
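
[Editor's note] status=143 above is not a daemon fault: systemd stopped the unit for reconfiguration and the container exited on SIGTERM. By shell convention 143 = 128 + 15 (SIGTERM), which is also why the unit records "Failed with result 'exit-code'" during what is an orchestrated restart. The arithmetic:

    # 143 = 128 + SIGTERM: the conventional status for "terminated by SIGTERM".
    import signal

    status = 143
    sig = status - 128
    print(sig, signal.Signals(sig).name)   # -> 15 SIGTERM
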
Nov 24 09:30:03 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:30:03 compute-0 ceph-mon[74331]: Reconfiguring rgw.rgw.compute-0.zlrxyg (unknown last config time)...
Nov 24 09:30:03 compute-0 ceph-mon[74331]: Reconfiguring daemon rgw.rgw.compute-0.zlrxyg on compute-0
Nov 24 09:30:03 compute-0 ceph-mon[74331]: 12.6 deep-scrub starts
Nov 24 09:30:03 compute-0 ceph-mon[74331]: 12.6 deep-scrub ok
Nov 24 09:30:03 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:03 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:03 compute-0 ceph-mon[74331]: osdmap e90: 3 total, 3 up, 3 in
Nov 24 09:30:03 compute-0 ceph-mon[74331]: 10.3 scrub starts
Nov 24 09:30:03 compute-0 ceph-mon[74331]: 10.3 scrub ok
Nov 24 09:30:03 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Nov 24 09:30:03 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Nov 24 09:30:03 compute-0 podman[103906]: 2025-11-24 09:30:03.733394996 +0000 UTC m=+0.037889582 container create c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa326360fabdfde1038259ff7bd88599db13b17caf385cfc3056ecfdc085ecf4/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 24 09:30:03 compute-0 podman[103906]: 2025-11-24 09:30:03.799876932 +0000 UTC m=+0.104371608 container init c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 24 09:30:03 compute-0 podman[103906]: 2025-11-24 09:30:03.804663004 +0000 UTC m=+0.109157630 container start c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:03 compute-0 bash[103906]: c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582
Nov 24 09:30:03 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 24 09:30:03 compute-0 podman[103906]: 2025-11-24 09:30:03.717464658 +0000 UTC m=+0.021959274 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.816Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.817Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.818Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.818Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.818Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.818Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 24 09:30:03 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.821Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.822Z caller=node_exporter.go:117 level=info collector=arp
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.822Z caller=node_exporter.go:117 level=info collector=bcache
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.822Z caller=node_exporter.go:117 level=info collector=bonding
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.822Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.822Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.822Z caller=node_exporter.go:117 level=info collector=cpu
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.822Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.822Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.822Z caller=node_exporter.go:117 level=info collector=dmi
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=edac
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=entropy
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=filefd
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=netclass
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=netdev
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.823Z caller=node_exporter.go:117 level=info collector=netstat
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.824Z caller=node_exporter.go:117 level=info collector=nfs
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.824Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.824Z caller=node_exporter.go:117 level=info collector=nvme
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.824Z caller=node_exporter.go:117 level=info collector=os
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.824Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.824Z caller=node_exporter.go:117 level=info collector=pressure
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.824Z caller=node_exporter.go:117 level=info collector=rapl
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.824Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.824Z caller=node_exporter.go:117 level=info collector=selinux
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.824Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.824Z caller=node_exporter.go:117 level=info collector=softnet
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.825Z caller=node_exporter.go:117 level=info collector=stat
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.825Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.825Z caller=node_exporter.go:117 level=info collector=textfile
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.825Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.825Z caller=node_exporter.go:117 level=info collector=time
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.825Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.825Z caller=node_exporter.go:117 level=info collector=uname
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.825Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.825Z caller=node_exporter.go:117 level=info collector=xfs
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.825Z caller=node_exporter.go:117 level=info collector=zfs
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.827Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 24 09:30:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0[103923]: ts=2025-11-24T09:30:03.827Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
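
[Editor's note] The restarted node_exporter 1.7.0 enumerates its enabled collectors and listens without TLS on port 9100; the earlier diskstats error merely disables udev-sourced device properties because /run/udev/data is not mounted into the container. The endpoint can be spot-checked directly (a sketch assuming port 9100 is reachable from this host):

    # Sketch: scrape the node_exporter started above and print two
    # well-known series. Assumes localhost:9100 is reachable.
    from urllib.request import urlopen

    with urlopen("http://localhost:9100/metrics", timeout=2) as r:
        for line in r.read().decode().splitlines():
            if line.startswith(("node_uname_info", "node_time_seconds")):
                print(line)
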
Nov 24 09:30:03 compute-0 sudo[103726]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:03 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:03 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:03 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 24 09:30:03 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 24 09:30:03 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 24 09:30:03 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 24 09:30:03 compute-0 sudo[103932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:04 compute-0 sudo[103932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:04 compute-0 sudo[103932]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:04 compute-0 sudo[103957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:30:04 compute-0 sudo[103957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:04 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:04 compute-0 podman[104000]: 2025-11-24 09:30:04.37021008 +0000 UTC m=+0.046872579 volume create 8018e77bbeaea46563dd0d8c7571e47321fee2c3901b79b07ef011576c509713
Nov 24 09:30:04 compute-0 podman[104000]: 2025-11-24 09:30:04.38260799 +0000 UTC m=+0.059270489 container create dd2c432550ee4bab9ac9b70be128bf9d9a5a67006ed54c5b408c672c8620a979 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_goldwasser, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 systemd[1]: Started libpod-conmon-dd2c432550ee4bab9ac9b70be128bf9d9a5a67006ed54c5b408c672c8620a979.scope.
Nov 24 09:30:04 compute-0 podman[104000]: 2025-11-24 09:30:04.349313666 +0000 UTC m=+0.025976215 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 24 09:30:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c49ca54e7409284c03964f9be65d39e842a05ccb0e3fe85af3b6c25b0f7f0e/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:04 compute-0 podman[104000]: 2025-11-24 09:30:04.486143634 +0000 UTC m=+0.162806153 container init dd2c432550ee4bab9ac9b70be128bf9d9a5a67006ed54c5b408c672c8620a979 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_goldwasser, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 podman[104000]: 2025-11-24 09:30:04.49839781 +0000 UTC m=+0.175060309 container start dd2c432550ee4bab9ac9b70be128bf9d9a5a67006ed54c5b408c672c8620a979 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_goldwasser, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 nice_goldwasser[104017]: 65534 65534
Nov 24 09:30:04 compute-0 systemd[1]: libpod-dd2c432550ee4bab9ac9b70be128bf9d9a5a67006ed54c5b408c672c8620a979.scope: Deactivated successfully.
Nov 24 09:30:04 compute-0 podman[104000]: 2025-11-24 09:30:04.502885424 +0000 UTC m=+0.179547923 container attach dd2c432550ee4bab9ac9b70be128bf9d9a5a67006ed54c5b408c672c8620a979 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_goldwasser, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 podman[104000]: 2025-11-24 09:30:04.504386236 +0000 UTC m=+0.181048755 container died dd2c432550ee4bab9ac9b70be128bf9d9a5a67006ed54c5b408c672c8620a979 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_goldwasser, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:04 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8c49ca54e7409284c03964f9be65d39e842a05ccb0e3fe85af3b6c25b0f7f0e-merged.mount: Deactivated successfully.
Nov 24 09:30:04 compute-0 podman[104000]: 2025-11-24 09:30:04.54607462 +0000 UTC m=+0.222737119 container remove dd2c432550ee4bab9ac9b70be128bf9d9a5a67006ed54c5b408c672c8620a979 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_goldwasser, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 podman[104000]: 2025-11-24 09:30:04.549624448 +0000 UTC m=+0.226286967 volume remove 8018e77bbeaea46563dd0d8c7571e47321fee2c3901b79b07ef011576c509713
Nov 24 09:30:04 compute-0 ceph-mon[74331]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Nov 24 09:30:04 compute-0 ceph-mon[74331]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Nov 24 09:30:04 compute-0 ceph-mon[74331]: pgmap v25: 353 pgs: 1 active+clean+scrubbing+deep, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 59 B/s, 2 objects/s recovering
Nov 24 09:30:04 compute-0 ceph-mon[74331]: 12.10 scrub starts
Nov 24 09:30:04 compute-0 ceph-mon[74331]: 12.10 scrub ok
Nov 24 09:30:04 compute-0 ceph-mon[74331]: osdmap e91: 3 total, 3 up, 3 in
Nov 24 09:30:04 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:04 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:04 compute-0 ceph-mon[74331]: 8.6 scrub starts
Nov 24 09:30:04 compute-0 ceph-mon[74331]: 8.6 scrub ok
Nov 24 09:30:04 compute-0 systemd[1]: libpod-conmon-dd2c432550ee4bab9ac9b70be128bf9d9a5a67006ed54c5b408c672c8620a979.scope: Deactivated successfully.
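
[Editor's note] The short-lived `nice_goldwasser` container above (volume create, print "65534 65534", die, remove, all within about 0.2 s) is cephadm probing the alertmanager image for the UID/GID it should own config files as; 65534:65534 is nobody:nobody inside that image, and the probe repeats below as `practical_jennings`. A sketch of such a probe (cephadm's exact invocation may differ):

    # Sketch: ask an image which uid/gid a path is owned by, via a
    # throwaway container. cephadm's exact probe may differ from this.
    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/prometheus/alertmanager:v0.25.0",
         "-c", "%u %g", "/etc/alertmanager"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(out)   # e.g. "65534 65534" == nobody:nobody in this image
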
Nov 24 09:30:04 compute-0 podman[104033]: 2025-11-24 09:30:04.611832327 +0000 UTC m=+0.040462963 volume create a13bfb5f9ef260818b8ebda43ba4b575cbc4c46212a4fa4cde8043a88fdfcc07
Nov 24 09:30:04 compute-0 podman[104033]: 2025-11-24 09:30:04.619410285 +0000 UTC m=+0.048040921 container create c188627917355398536b8012e6432ff81791e58e43a0ad5e1870fcd841ffdf54 (image=quay.io/prometheus/alertmanager:v0.25.0, name=practical_jennings, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.15 deep-scrub starts
Nov 24 09:30:04 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 10.15 deep-scrub ok
Nov 24 09:30:04 compute-0 systemd[1]: Started libpod-conmon-c188627917355398536b8012e6432ff81791e58e43a0ad5e1870fcd841ffdf54.scope.
Nov 24 09:30:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699e2e437334c67ec3d29ef092d9c859037c371d3ee5647aaf8c7ad4637b14f4/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:04 compute-0 podman[104033]: 2025-11-24 09:30:04.598152631 +0000 UTC m=+0.026783287 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 24 09:30:04 compute-0 podman[104033]: 2025-11-24 09:30:04.698753144 +0000 UTC m=+0.127383810 container init c188627917355398536b8012e6432ff81791e58e43a0ad5e1870fcd841ffdf54 (image=quay.io/prometheus/alertmanager:v0.25.0, name=practical_jennings, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 podman[104033]: 2025-11-24 09:30:04.706165399 +0000 UTC m=+0.134796035 container start c188627917355398536b8012e6432ff81791e58e43a0ad5e1870fcd841ffdf54 (image=quay.io/prometheus/alertmanager:v0.25.0, name=practical_jennings, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 practical_jennings[104049]: 65534 65534
Nov 24 09:30:04 compute-0 systemd[1]: libpod-c188627917355398536b8012e6432ff81791e58e43a0ad5e1870fcd841ffdf54.scope: Deactivated successfully.
Nov 24 09:30:04 compute-0 podman[104033]: 2025-11-24 09:30:04.710374294 +0000 UTC m=+0.139004960 container attach c188627917355398536b8012e6432ff81791e58e43a0ad5e1870fcd841ffdf54 (image=quay.io/prometheus/alertmanager:v0.25.0, name=practical_jennings, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 podman[104033]: 2025-11-24 09:30:04.711700481 +0000 UTC m=+0.140331117 container died c188627917355398536b8012e6432ff81791e58e43a0ad5e1870fcd841ffdf54 (image=quay.io/prometheus/alertmanager:v0.25.0, name=practical_jennings, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-699e2e437334c67ec3d29ef092d9c859037c371d3ee5647aaf8c7ad4637b14f4-merged.mount: Deactivated successfully.
Nov 24 09:30:04 compute-0 podman[104033]: 2025-11-24 09:30:04.753518409 +0000 UTC m=+0.182149045 container remove c188627917355398536b8012e6432ff81791e58e43a0ad5e1870fcd841ffdf54 (image=quay.io/prometheus/alertmanager:v0.25.0, name=practical_jennings, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 podman[104033]: 2025-11-24 09:30:04.758382333 +0000 UTC m=+0.187012979 volume remove a13bfb5f9ef260818b8ebda43ba4b575cbc4c46212a4fa4cde8043a88fdfcc07
Nov 24 09:30:04 compute-0 systemd[1]: libpod-conmon-c188627917355398536b8012e6432ff81791e58e43a0ad5e1870fcd841ffdf54.scope: Deactivated successfully.
Nov 24 09:30:04 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:30:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:04 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[98094]: ts=2025-11-24T09:30:04.954Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Nov 24 09:30:04 compute-0 podman[104098]: 2025-11-24 09:30:04.965482222 +0000 UTC m=+0.046815248 container died 32681d7ec5cc8674cee7672941d75d1674b5a61184918a28db89f06c57c7c5f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9128bf420034ef6f01c3d1c6331bf79eff09ec3ed84cf7539f0b02de21039bd9-merged.mount: Deactivated successfully.
Nov 24 09:30:05 compute-0 podman[104098]: 2025-11-24 09:30:04.999904087 +0000 UTC m=+0.081237103 container remove 32681d7ec5cc8674cee7672941d75d1674b5a61184918a28db89f06c57c7c5f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:05 compute-0 podman[104098]: 2025-11-24 09:30:05.00472867 +0000 UTC m=+0.086061716 volume remove aefd7ce6b9dfb4441a7c905b4ba016f6fff7115a1321898dd2b88ee2cc7ec854
Nov 24 09:30:05 compute-0 bash[104098]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0
Nov 24 09:30:05 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@alertmanager.compute-0.service: Deactivated successfully.
Nov 24 09:30:05 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:30:05 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@alertmanager.compute-0.service: Consumed 1.081s CPU time.
Nov 24 09:30:05 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:30:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:05.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v27: 353 pgs: 1 active+clean+scrubbing+deep, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 24 09:30:05 compute-0 podman[104202]: 2025-11-24 09:30:05.392537023 +0000 UTC m=+0.041799209 volume create 63422426c69fa37c387ce26f10dd3e69f754cab48130c4225d581b1c4264943a
Nov 24 09:30:05 compute-0 podman[104202]: 2025-11-24 09:30:05.40334998 +0000 UTC m=+0.052612156 container create 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8bf1c5429a3b8b47917746f34853708bcb7bd032e84000717bf57fa8186bd0/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8bf1c5429a3b8b47917746f34853708bcb7bd032e84000717bf57fa8186bd0/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:05 compute-0 podman[104202]: 2025-11-24 09:30:05.458854494 +0000 UTC m=+0.108116700 container init 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:05 compute-0 podman[104202]: 2025-11-24 09:30:05.463991636 +0000 UTC m=+0.113253822 container start 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:05 compute-0 bash[104202]: 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6
Nov 24 09:30:05 compute-0 podman[104202]: 2025-11-24 09:30:05.372485423 +0000 UTC m=+0.021747629 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 24 09:30:05 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:30:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:05.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:30:05.490Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Nov 24 09:30:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:30:05.490Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Nov 24 09:30:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:30:05.500Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Nov 24 09:30:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:30:05.501Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Nov 24 09:30:05 compute-0 sudo[103957]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:05 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:05 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:05 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:05 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:05 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 24 09:30:05 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 24 09:30:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:30:05.544Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 24 09:30:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:30:05.545Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 24 09:30:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:30:05.550Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Nov 24 09:30:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:30:05.550Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
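
[Editor's note] The restarted alertmanager advertises HA gossip on 192.168.122.100:9094 and serves its plain-HTTP API and UI on 9093; "Waiting for gossip to settle" is the normal cluster-join phase, and the "gossip not settled ... polls=0" line two seconds later is simply its first poll. Readiness can be checked through the v2 API (a sketch using the address from the lines above):

    # Sketch: query the Alertmanager v2 status endpoint started above.
    import json
    from urllib.request import urlopen

    with urlopen("http://192.168.122.100:9093/api/v2/status", timeout=2) as r:
        status = json.load(r)
    print(status.get("cluster", {}).get("status"))   # "ready" once settled
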
Nov 24 09:30:05 compute-0 ceph-mon[74331]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 24 09:30:05 compute-0 ceph-mon[74331]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 24 09:30:05 compute-0 ceph-mon[74331]: 9.a scrub starts
Nov 24 09:30:05 compute-0 ceph-mon[74331]: 9.a scrub ok
Nov 24 09:30:05 compute-0 ceph-mon[74331]: 10.15 deep-scrub starts
Nov 24 09:30:05 compute-0 ceph-mon[74331]: 10.15 deep-scrub ok
Nov 24 09:30:05 compute-0 ceph-mon[74331]: 12.18 scrub starts
Nov 24 09:30:05 compute-0 ceph-mon[74331]: 12.18 scrub ok
Nov 24 09:30:05 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:05 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:05 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Nov 24 09:30:05 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Nov 24 09:30:05 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.a scrub starts
Nov 24 09:30:05 compute-0 sudo[104238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:05 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 12.a scrub ok
Nov 24 09:30:05 compute-0 sudo[104238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:05 compute-0 sudo[104238]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:05 compute-0 sudo[104263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64
Nov 24 09:30:05 compute-0 sudo[104263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:06 compute-0 podman[104306]: 2025-11-24 09:30:06.209024192 +0000 UTC m=+0.059044733 container create f70417f601303c0b7a655fbe49e73ac949360026beeefd7e8de9784a55763e8b (image=quay.io/ceph/grafana:10.4.0, name=recursing_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:06 compute-0 systemd[1]: Started libpod-conmon-f70417f601303c0b7a655fbe49e73ac949360026beeefd7e8de9784a55763e8b.scope.
Nov 24 09:30:06 compute-0 podman[104306]: 2025-11-24 09:30:06.188756726 +0000 UTC m=+0.038777437 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 24 09:30:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:06 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:06 compute-0 podman[104306]: 2025-11-24 09:30:06.314709905 +0000 UTC m=+0.164730476 container init f70417f601303c0b7a655fbe49e73ac949360026beeefd7e8de9784a55763e8b (image=quay.io/ceph/grafana:10.4.0, name=recursing_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:06 compute-0 podman[104306]: 2025-11-24 09:30:06.324927576 +0000 UTC m=+0.174948137 container start f70417f601303c0b7a655fbe49e73ac949360026beeefd7e8de9784a55763e8b (image=quay.io/ceph/grafana:10.4.0, name=recursing_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:06 compute-0 podman[104306]: 2025-11-24 09:30:06.328512574 +0000 UTC m=+0.178533145 container attach f70417f601303c0b7a655fbe49e73ac949360026beeefd7e8de9784a55763e8b (image=quay.io/ceph/grafana:10.4.0, name=recursing_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:06 compute-0 recursing_heyrovsky[104323]: 472 0
Nov 24 09:30:06 compute-0 systemd[1]: libpod-f70417f601303c0b7a655fbe49e73ac949360026beeefd7e8de9784a55763e8b.scope: Deactivated successfully.
Nov 24 09:30:06 compute-0 podman[104306]: 2025-11-24 09:30:06.332583826 +0000 UTC m=+0.182604367 container died f70417f601303c0b7a655fbe49e73ac949360026beeefd7e8de9784a55763e8b (image=quay.io/ceph/grafana:10.4.0, name=recursing_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-959d42e6f5f1fc67915eafbff8eac8509c81f90b3e714f9d4242bc0c1b25a1fd-merged.mount: Deactivated successfully.
Nov 24 09:30:06 compute-0 podman[104306]: 2025-11-24 09:30:06.385747176 +0000 UTC m=+0.235767727 container remove f70417f601303c0b7a655fbe49e73ac949360026beeefd7e8de9784a55763e8b (image=quay.io/ceph/grafana:10.4.0, name=recursing_heyrovsky, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:06 compute-0 systemd[1]: libpod-conmon-f70417f601303c0b7a655fbe49e73ac949360026beeefd7e8de9784a55763e8b.scope: Deactivated successfully.
Nov 24 09:30:06 compute-0 podman[104341]: 2025-11-24 09:30:06.457285381 +0000 UTC m=+0.047776053 container create 0a99f0373d0f6416a89b7319cb0eebdb65234685cbba815bc14c47fd72ac3a20 (image=quay.io/ceph/grafana:10.4.0, name=frosty_agnesi, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:06 compute-0 systemd[1]: Started libpod-conmon-0a99f0373d0f6416a89b7319cb0eebdb65234685cbba815bc14c47fd72ac3a20.scope.
Nov 24 09:30:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:06 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:06 compute-0 podman[104341]: 2025-11-24 09:30:06.436937093 +0000 UTC m=+0.027427785 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 24 09:30:06 compute-0 podman[104341]: 2025-11-24 09:30:06.531666985 +0000 UTC m=+0.122157677 container init 0a99f0373d0f6416a89b7319cb0eebdb65234685cbba815bc14c47fd72ac3a20 (image=quay.io/ceph/grafana:10.4.0, name=frosty_agnesi, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:06 compute-0 podman[104341]: 2025-11-24 09:30:06.537483605 +0000 UTC m=+0.127974277 container start 0a99f0373d0f6416a89b7319cb0eebdb65234685cbba815bc14c47fd72ac3a20 (image=quay.io/ceph/grafana:10.4.0, name=frosty_agnesi, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:06 compute-0 frosty_agnesi[104357]: 472 0
Nov 24 09:30:06 compute-0 podman[104341]: 2025-11-24 09:30:06.540628371 +0000 UTC m=+0.131119043 container attach 0a99f0373d0f6416a89b7319cb0eebdb65234685cbba815bc14c47fd72ac3a20 (image=quay.io/ceph/grafana:10.4.0, name=frosty_agnesi, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:06 compute-0 systemd[1]: libpod-0a99f0373d0f6416a89b7319cb0eebdb65234685cbba815bc14c47fd72ac3a20.scope: Deactivated successfully.
Nov 24 09:30:06 compute-0 podman[104341]: 2025-11-24 09:30:06.541954698 +0000 UTC m=+0.132445370 container died 0a99f0373d0f6416a89b7319cb0eebdb65234685cbba815bc14c47fd72ac3a20 (image=quay.io/ceph/grafana:10.4.0, name=frosty_agnesi, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-202c895610cc9214900162ffa5a65a6229495e7f772ab00daf12cb3ce8996b70-merged.mount: Deactivated successfully.
Nov 24 09:30:06 compute-0 podman[104341]: 2025-11-24 09:30:06.582988685 +0000 UTC m=+0.173479357 container remove 0a99f0373d0f6416a89b7319cb0eebdb65234685cbba815bc14c47fd72ac3a20 (image=quay.io/ceph/grafana:10.4.0, name=frosty_agnesi, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:06 compute-0 systemd[1]: libpod-conmon-0a99f0373d0f6416a89b7319cb0eebdb65234685cbba815bc14c47fd72ac3a20.scope: Deactivated successfully.
Nov 24 09:30:06 compute-0 ceph-mon[74331]: pgmap v27: 353 pgs: 1 active+clean+scrubbing+deep, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Nov 24 09:30:06 compute-0 ceph-mon[74331]: 9.1a scrub starts
Nov 24 09:30:06 compute-0 ceph-mon[74331]: 9.1a scrub ok
Nov 24 09:30:06 compute-0 ceph-mon[74331]: Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 24 09:30:06 compute-0 ceph-mon[74331]: Reconfiguring daemon grafana.compute-0 on compute-0
Nov 24 09:30:06 compute-0 ceph-mon[74331]: 12.a scrub starts
Nov 24 09:30:06 compute-0 ceph-mon[74331]: 12.a scrub ok
Nov 24 09:30:06 compute-0 ceph-mon[74331]: 9.d scrub starts
Nov 24 09:30:06 compute-0 ceph-mon[74331]: 9.d scrub ok
Nov 24 09:30:06 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:30:06 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.c deep-scrub starts
Nov 24 09:30:06 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.c deep-scrub ok
Nov 24 09:30:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:06 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:07.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v28: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Nov 24 09:30:07 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
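
[Editor's note] This mon_command is the mgr's pg autoscaler stepping pgp_num_actual on pool default.rgw.log toward its target; it is what produced osdmap epochs e90-e92 and the transient "2 remapped+peering" PGs in the pgmap lines, which settle back to 353 active+clean by pgmap v28 below. The split state is visible per pool (a sketch; field names are those recent releases emit for `ceph osd pool ls detail --format json`):

    # Sketch: show pg_num/pgp_num and their targets while the autoscaler
    # steps them. Assumes `ceph` CLI + admin keyring on this host.
    import json, subprocess

    pools = json.loads(subprocess.run(
        ["ceph", "osd", "pool", "ls", "detail", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)
    for p in pools:
        if p.get("pool_name") == "default.rgw.log":
            print("pg_num", p.get("pg_num"), "->", p.get("pg_num_target"),
                  "| pgp_num", p.get("pg_placement_num"),
                  "->", p.get("pg_placement_num_target"))
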
Nov 24 09:30:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:07.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:30:07.502Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000689259s
Nov 24 09:30:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 24 09:30:07 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Nov 24 09:30:07 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Nov 24 09:30:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=server t=2025-11-24T09:30:07.954930032Z level=info msg="Shutdown started" reason="System signal: terminated"
Nov 24 09:30:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=ticker t=2025-11-24T09:30:07.955189519Z level=info msg=stopped last_tick=2025-11-24T09:30:00Z
Nov 24 09:30:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=tracing t=2025-11-24T09:30:07.95524794Z level=info msg="Closing tracing"
Nov 24 09:30:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=grafana-apiserver t=2025-11-24T09:30:07.955323232Z level=info msg="StorageObjectCountTracker pruner is exiting"
Nov 24 09:30:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[98649]: logger=sqlstore.transactions t=2025-11-24T09:30:07.966737496Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
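
[Editor's note] The "database is locked" line is Grafana's sqlstore hitting SQLite's single-writer lock while a write races the shutdown; retry=0 shows its retry loop taking over, and the condition clears on its own. A generic illustration of the same lock (not Grafana's code):

    # Generic illustration (not Grafana's code): SQLite allows one writer;
    # a second writer with no busy timeout fails exactly like the log line.
    import os, sqlite3, tempfile

    path = os.path.join(tempfile.mkdtemp(), "demo.db")
    a = sqlite3.connect(path, isolation_level=None)
    a.execute("CREATE TABLE t(x)")
    a.execute("BEGIN IMMEDIATE")                      # a takes the write lock
    b = sqlite3.connect(path, isolation_level=None, timeout=0)
    try:
        b.execute("BEGIN IMMEDIATE")                  # no busy wait
    except sqlite3.OperationalError as e:
        print(e)                                      # -> database is locked
    a.execute("ROLLBACK")
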
Nov 24 09:30:08 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 24 09:30:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 24 09:30:08 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 24 09:30:08 compute-0 podman[104403]: 2025-11-24 09:30:08.067319929 +0000 UTC m=+1.268959169 container died a0674656060959d25392ea4042b567724541ad68ff4b7e0cdef72cb164c1b850 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:08 compute-0 ceph-mon[74331]: 9.c deep-scrub starts
Nov 24 09:30:08 compute-0 ceph-mon[74331]: 9.c deep-scrub ok
Nov 24 09:30:08 compute-0 ceph-mon[74331]: 9.13 scrub starts
Nov 24 09:30:08 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 09:30:08 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 09:30:08 compute-0 ceph-mon[74331]: 9.13 scrub ok
Nov 24 09:30:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f569a0a73c5eca6b9867f2fd2c49dff79b11c3f7968f5a53a5ba66d1653ddc9-merged.mount: Deactivated successfully.
Nov 24 09:30:08 compute-0 podman[104403]: 2025-11-24 09:30:08.199051358 +0000 UTC m=+1.400690588 container remove a0674656060959d25392ea4042b567724541ad68ff4b7e0cdef72cb164c1b850 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:08 compute-0 bash[104403]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:08 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:08 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@grafana.compute-0.service: Deactivated successfully.
Nov 24 09:30:08 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:30:08 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@grafana.compute-0.service: Consumed 4.117s CPU time.
Nov 24 09:30:08 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:08 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:08 compute-0 podman[104510]: 2025-11-24 09:30:08.631740924 +0000 UTC m=+0.076420320 container create 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:08 compute-0 podman[104510]: 2025-11-24 09:30:08.594310356 +0000 UTC m=+0.038989782 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 24 09:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20b0d1a16daf49a9084745a37ebe4e63bb9e84c52fcb91816df415f1fd9ef7c/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20b0d1a16daf49a9084745a37ebe4e63bb9e84c52fcb91816df415f1fd9ef7c/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20b0d1a16daf49a9084745a37ebe4e63bb9e84c52fcb91816df415f1fd9ef7c/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20b0d1a16daf49a9084745a37ebe4e63bb9e84c52fcb91816df415f1fd9ef7c/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20b0d1a16daf49a9084745a37ebe4e63bb9e84c52fcb91816df415f1fd9ef7c/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:08 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 24 09:30:08 compute-0 podman[104510]: 2025-11-24 09:30:08.758301051 +0000 UTC m=+0.202980457 container init 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:08 compute-0 podman[104510]: 2025-11-24 09:30:08.763968097 +0000 UTC m=+0.208647503 container start 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:08 compute-0 bash[104510]: 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2
Nov 24 09:30:08 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 24 09:30:08 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:30:08 compute-0 sudo[104263]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:08 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:08 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:08 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Nov 24 09:30:08 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Nov 24 09:30:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 24 09:30:08 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:30:08 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Nov 24 09:30:08 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
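
[Editor's note] Reconfiguring crash.compute-1 goes through `auth get-or-create`, which is idempotent: it returns the existing key for client.crash.compute-1 or mints one restricted by the `profile crash` caps (enough to post crash reports, nothing more). The same call from the CLI, wrapped (a sketch assuming the `ceph` CLI and an admin keyring):

    # Sketch: the idempotent keyring fetch the mgr issues above.
    import subprocess

    keyring = subprocess.run(
        ["ceph", "auth", "get-or-create", "client.crash.compute-1",
         "mon", "profile crash", "mgr", "profile crash"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(keyring)   # [client.crash.compute-1] section with its key
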
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:08 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939148429Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-11-24T09:30:08Z
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939404245Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939417886Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939422686Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939426646Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939430566Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939434206Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939438606Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939442687Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939446107Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939449357Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939452817Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939456307Z level=info msg=Target target=[all]
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939462487Z level=info msg="Path Home" path=/usr/share/grafana
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939465927Z level=info msg="Path Data" path=/var/lib/grafana
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939469437Z level=info msg="Path Logs" path=/var/log/grafana
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939473277Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939477677Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=settings t=2025-11-24T09:30:08.939481638Z level=info msg="App mode production"
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=sqlstore t=2025-11-24T09:30:08.939780836Z level=info msg="Connecting to DB" dbtype=sqlite3
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=sqlstore t=2025-11-24T09:30:08.939801706Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=migrator t=2025-11-24T09:30:08.940349402Z level=info msg="Starting DB migrations"
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=migrator t=2025-11-24T09:30:08.957740849Z level=info msg="migrations completed" performed=0 skipped=547 duration=660.208µs
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=sqlstore t=2025-11-24T09:30:08.958801448Z level=info msg="Created default organization"
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=secrets t=2025-11-24T09:30:08.959310312Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Nov 24 09:30:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=plugin.store t=2025-11-24T09:30:08.977517242Z level=info msg="Loading plugins..."
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=local.finder t=2025-11-24T09:30:09.055671629Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=plugin.store t=2025-11-24T09:30:09.05571321Z level=info msg="Plugins loaded" count=55 duration=78.196648ms
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=query_data t=2025-11-24T09:30:09.058379654Z level=info msg="Query Service initialization"
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=live.push_http t=2025-11-24T09:30:09.061199481Z level=info msg="Live Push Gateway initialization"
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=ngalert.migration t=2025-11-24T09:30:09.064749889Z level=info msg=Starting
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=ngalert.state.manager t=2025-11-24T09:30:09.127690278Z level=info msg="Running in alternative execution of Error/NoData mode"
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=infra.usagestats.collector t=2025-11-24T09:30:09.129870528Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=provisioning.datasources t=2025-11-24T09:30:09.13249888Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Nov 24 09:30:09 compute-0 ceph-mon[74331]: pgmap v28: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:09 compute-0 ceph-mon[74331]: 9.0 scrub starts
Nov 24 09:30:09 compute-0 ceph-mon[74331]: 9.0 scrub ok
Nov 24 09:30:09 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 24 09:30:09 compute-0 ceph-mon[74331]: osdmap e92: 3 total, 3 up, 3 in
Nov 24 09:30:09 compute-0 ceph-mon[74331]: 9.9 scrub starts
Nov 24 09:30:09 compute-0 ceph-mon[74331]: 9.9 scrub ok
Nov 24 09:30:09 compute-0 ceph-mon[74331]: 8.12 scrub starts
Nov 24 09:30:09 compute-0 ceph-mon[74331]: 8.12 scrub ok
Nov 24 09:30:09 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:09 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:09 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:30:09 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 09:30:09 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=provisioning.alerting t=2025-11-24T09:30:09.156963152Z level=info msg="starting to provision alerting"
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=provisioning.alerting t=2025-11-24T09:30:09.156987562Z level=info msg="finished to provision alerting"
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=ngalert.state.manager t=2025-11-24T09:30:09.157075516Z level=info msg="Warming state cache for startup"
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=grafanaStorageLogger t=2025-11-24T09:30:09.157262051Z level=info msg="Storage starting"
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=ngalert.multiorg.alertmanager t=2025-11-24T09:30:09.157461676Z level=info msg="Starting MultiOrg Alertmanager"
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=provisioning.dashboard t=2025-11-24T09:30:09.158666789Z level=info msg="starting to provision dashboards"
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=http.server t=2025-11-24T09:30:09.160588382Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=http.server t=2025-11-24T09:30:09.160925131Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=provisioning.dashboard t=2025-11-24T09:30:09.200838107Z level=info msg="finished to provision dashboards"
Nov 24 09:30:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:09.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=ngalert.state.manager t=2025-11-24T09:30:09.21442154Z level=info msg="State cache has been initialized" states=0 duration=57.338905ms
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=ngalert.scheduler t=2025-11-24T09:30:09.214482713Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=ticker t=2025-11-24T09:30:09.214555944Z level=info msg=starting first_tick=2025-11-24T09:30:10Z
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=grafana.update.checker t=2025-11-24T09:30:09.24862611Z level=info msg="Update check succeeded" duration=91.280567ms
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=plugins.update.checker t=2025-11-24T09:30:09.25699402Z level=info msg="Update check succeeded" duration=99.632157ms
Nov 24 09:30:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Nov 24 09:30:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 09:30:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:30:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:09.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:30:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:09 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Nov 24 09:30:09 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Nov 24 09:30:09 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Nov 24 09:30:09 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=grafana-apiserver t=2025-11-24T09:30:09.559084389Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Nov 24 09:30:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=grafana-apiserver t=2025-11-24T09:30:09.55952362Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Nov 24 09:30:09 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 24 09:30:09 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 24 09:30:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 24 09:30:10 compute-0 ceph-mon[74331]: 9.1 scrub starts
Nov 24 09:30:10 compute-0 ceph-mon[74331]: 9.1 scrub ok
Nov 24 09:30:10 compute-0 ceph-mon[74331]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 24 09:30:10 compute-0 ceph-mon[74331]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 24 09:30:10 compute-0 ceph-mon[74331]: 9.f scrub starts
Nov 24 09:30:10 compute-0 ceph-mon[74331]: 9.f scrub ok
Nov 24 09:30:10 compute-0 ceph-mon[74331]: pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 09:30:10 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 09:30:10 compute-0 ceph-mon[74331]: 8.19 deep-scrub starts
Nov 24 09:30:10 compute-0 ceph-mon[74331]: 8.19 deep-scrub ok
Nov 24 09:30:10 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:10 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 09:30:10 compute-0 ceph-mon[74331]: Reconfiguring osd.1 (monmap changed)...
Nov 24 09:30:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:10 compute-0 ceph-mon[74331]: Reconfiguring daemon osd.1 on compute-1
Nov 24 09:30:10 compute-0 ceph-mon[74331]: 9.4 scrub starts
Nov 24 09:30:10 compute-0 ceph-mon[74331]: 9.4 scrub ok
Nov 24 09:30:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 24 09:30:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 24 09:30:10 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 24 09:30:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:10 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:30:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:30:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:10 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Nov 24 09:30:10 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Nov 24 09:30:10 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Nov 24 09:30:10 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Nov 24 09:30:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:10 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:10 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 24 09:30:10 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 24 09:30:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:10 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:30:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:30:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:10 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Nov 24 09:30:10 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Nov 24 09:30:10 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Nov 24 09:30:10 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Nov 24 09:30:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:30:10] "GET /metrics HTTP/1.1" 200 48286 "" "Prometheus/2.51.0"
Nov 24 09:30:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:30:10] "GET /metrics HTTP/1.1" 200 48286 "" "Prometheus/2.51.0"
Nov 24 09:30:11 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 24 09:30:11 compute-0 ceph-mon[74331]: osdmap e93: 3 total, 3 up, 3 in
Nov 24 09:30:11 compute-0 ceph-mon[74331]: 9.8 scrub starts
Nov 24 09:30:11 compute-0 ceph-mon[74331]: 9.8 scrub ok
Nov 24 09:30:11 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:11 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:11 compute-0 ceph-mon[74331]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 24 09:30:11 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:30:11 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:30:11 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:11 compute-0 ceph-mon[74331]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 24 09:30:11 compute-0 ceph-mon[74331]: 9.e scrub starts
Nov 24 09:30:11 compute-0 ceph-mon[74331]: 9.e scrub ok
Nov 24 09:30:11 compute-0 ceph-mon[74331]: 9.1c scrub starts
Nov 24 09:30:11 compute-0 ceph-mon[74331]: 9.1c scrub ok
Nov 24 09:30:11 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:11 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:11 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 09:30:11 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 09:30:11 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:11.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v32: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Nov 24 09:30:11 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 09:30:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:11.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:11 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 24 09:30:11 compute-0 ceph-osd[82549]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 24 09:30:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:30:11 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:30:11 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:11 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.rzcnzg (monmap changed)...
Nov 24 09:30:11 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.rzcnzg (monmap changed)...
Nov 24 09:30:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 24 09:30:11 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:30:11 compute-0 ceph-mgr[74626]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.rzcnzg on compute-2
Nov 24 09:30:11 compute-0 ceph-mgr[74626]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.rzcnzg on compute-2
Nov 24 09:30:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 24 09:30:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 24 09:30:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 24 09:30:12 compute-0 ceph-mon[74331]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 24 09:30:12 compute-0 ceph-mon[74331]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 24 09:30:12 compute-0 ceph-mon[74331]: 9.b deep-scrub starts
Nov 24 09:30:12 compute-0 ceph-mon[74331]: 9.b deep-scrub ok
Nov 24 09:30:12 compute-0 ceph-mon[74331]: pgmap v32: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:12 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 09:30:12 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 09:30:12 compute-0 ceph-mon[74331]: 9.6 scrub starts
Nov 24 09:30:12 compute-0 ceph-mon[74331]: 9.6 scrub ok
Nov 24 09:30:12 compute-0 ceph-mon[74331]: 9.12 scrub starts
Nov 24 09:30:12 compute-0 ceph-mon[74331]: 9.12 scrub ok
Nov 24 09:30:12 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:12 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:12 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:30:12 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.rzcnzg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 09:30:12 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 09:30:12 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:12 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 24 09:30:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:12 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf4003e40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:12 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:30:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:30:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:12 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 24 09:30:12 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 24 09:30:12 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 24 09:30:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Nov 24 09:30:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:12 compute-0 ceph-mgr[74626]: [prometheus INFO root] Restarting engine...
Nov 24 09:30:12 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:30:12] ENGINE Bus STOPPING
Nov 24 09:30:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: [24/Nov/2025:09:30:12] ENGINE Bus STOPPING
Nov 24 09:30:12 compute-0 sudo[104558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:12 compute-0 sudo[104558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:12 compute-0 sudo[104558]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:12 compute-0 sudo[104583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:30:12 compute-0 sudo[104583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:12 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: [24/Nov/2025:09:30:13] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Nov 24 09:30:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: [24/Nov/2025:09:30:13] ENGINE Bus STOPPED
Nov 24 09:30:13 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:30:13] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Nov 24 09:30:13 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:30:13] ENGINE Bus STOPPED
Nov 24 09:30:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: [24/Nov/2025:09:30:13] ENGINE Bus STARTING
Nov 24 09:30:13 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:30:13] ENGINE Bus STARTING
Nov 24 09:30:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 24 09:30:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 24 09:30:13 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 24 09:30:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: [24/Nov/2025:09:30:13] ENGINE Serving on http://:::9283
Nov 24 09:30:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: [24/Nov/2025:09:30:13] ENGINE Bus STARTED
Nov 24 09:30:13 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:30:13] ENGINE Serving on http://:::9283
Nov 24 09:30:13 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.error] [24/Nov/2025:09:30:13] ENGINE Bus STARTED
Nov 24 09:30:13 compute-0 ceph-mgr[74626]: [prometheus INFO root] Engine started.
Nov 24 09:30:13 compute-0 ceph-mon[74331]: Reconfiguring mgr.compute-2.rzcnzg (monmap changed)...
Nov 24 09:30:13 compute-0 ceph-mon[74331]: Reconfiguring daemon mgr.compute-2.rzcnzg on compute-2
Nov 24 09:30:13 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 24 09:30:13 compute-0 ceph-mon[74331]: 9.3 scrub starts
Nov 24 09:30:13 compute-0 ceph-mon[74331]: osdmap e94: 3 total, 3 up, 3 in
Nov 24 09:30:13 compute-0 ceph-mon[74331]: 9.3 scrub ok
Nov 24 09:30:13 compute-0 ceph-mon[74331]: 9.1e scrub starts
Nov 24 09:30:13 compute-0 ceph-mon[74331]: 9.1e scrub ok
Nov 24 09:30:13 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:13 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:13 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 24 09:30:13 compute-0 ceph-mon[74331]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Nov 24 09:30:13 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 24 09:30:13 compute-0 ceph-mon[74331]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Nov 24 09:30:13 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 24 09:30:13 compute-0 ceph-mon[74331]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 24 09:30:13 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:13 compute-0 ceph-mon[74331]: osdmap e95: 3 total, 3 up, 3 in
Nov 24 09:30:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:13.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v35: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Nov 24 09:30:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 09:30:13 compute-0 podman[104691]: 2025-11-24 09:30:13.30320058 +0000 UTC m=+0.070526819 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:30:13 compute-0 podman[104691]: 2025-11-24 09:30:13.400686428 +0000 UTC m=+0.168012647 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 24 09:30:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:13.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:13 compute-0 podman[104828]: 2025-11-24 09:30:13.997665768 +0000 UTC m=+0.064733760 container exec c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:14 compute-0 podman[104828]: 2025-11-24 09:30:14.004246438 +0000 UTC m=+0.071314440 container exec_died c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 24 09:30:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 24 09:30:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 24 09:30:14 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 24 09:30:14 compute-0 ceph-mon[74331]: 9.7 scrub starts
Nov 24 09:30:14 compute-0 ceph-mon[74331]: 9.7 scrub ok
Nov 24 09:30:14 compute-0 ceph-mon[74331]: pgmap v35: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:14 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 09:30:14 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 09:30:14 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 24 09:30:14 compute-0 ceph-mon[74331]: osdmap e96: 3 total, 3 up, 3 in
Nov 24 09:30:14 compute-0 podman[104901]: 2025-11-24 09:30:14.261662959 +0000 UTC m=+0.051042223 container exec 3adc7e4dbfb76acd70b92bdc8783d49c26735889ac1576ee9a74ae48f52acf62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:30:14 compute-0 podman[104901]: 2025-11-24 09:30:14.27551169 +0000 UTC m=+0.064890954 container exec_died 3adc7e4dbfb76acd70b92bdc8783d49c26735889ac1576ee9a74ae48f52acf62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 24 09:30:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:14 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab20001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:14 compute-0 podman[104967]: 2025-11-24 09:30:14.478454654 +0000 UTC m=+0.054342953 container exec 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:30:14 compute-0 podman[104967]: 2025-11-24 09:30:14.48447603 +0000 UTC m=+0.060364239 container exec_died 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:30:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:14 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:14 compute-0 systemd[92737]: Starting Mark boot as successful...
Nov 24 09:30:14 compute-0 systemd[92737]: Finished Mark boot as successful.
Nov 24 09:30:14 compute-0 podman[105029]: 2025-11-24 09:30:14.722262752 +0000 UTC m=+0.075162145 container exec da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20)
Nov 24 09:30:14 compute-0 podman[105029]: 2025-11-24 09:30:14.743568517 +0000 UTC m=+0.096467890 container exec_died da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, version=2.2.4, com.redhat.component=keepalived-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, distribution-scope=public)
Nov 24 09:30:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:14 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:15 compute-0 podman[105096]: 2025-11-24 09:30:15.018713856 +0000 UTC m=+0.060540254 container exec 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:15 compute-0 podman[105096]: 2025-11-24 09:30:15.048652108 +0000 UTC m=+0.090478506 container exec_died 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:15.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 24 09:30:15 compute-0 ceph-mon[74331]: 9.1b scrub starts
Nov 24 09:30:15 compute-0 ceph-mon[74331]: 9.1b scrub ok
Nov 24 09:30:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 24 09:30:15 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 24 09:30:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v38: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Nov 24 09:30:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 09:30:15 compute-0 podman[105172]: 2025-11-24 09:30:15.291368666 +0000 UTC m=+0.065848610 container exec 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:30:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:30:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:30:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:30:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:30:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:30:15 compute-0 podman[105172]: 2025-11-24 09:30:15.482409163 +0000 UTC m=+0.256889067 container exec_died 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:30:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:15.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:30:15.506Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.004151316s
Nov 24 09:30:15 compute-0 podman[105282]: 2025-11-24 09:30:15.871567224 +0000 UTC m=+0.049053029 container exec 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:15 compute-0 podman[105282]: 2025-11-24 09:30:15.906576685 +0000 UTC m=+0.084062470 container exec_died 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:30:15 compute-0 sudo[104583]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:30:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:30:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:16 compute-0 sudo[105325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:16 compute-0 sudo[105325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:16 compute-0 sudo[105325]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:16 compute-0 sudo[105350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:30:16 compute-0 sudo[105350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 24 09:30:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 24 09:30:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 24 09:30:16 compute-0 ceph-mon[74331]: 9.18 scrub starts
Nov 24 09:30:16 compute-0 ceph-mon[74331]: 9.18 scrub ok
Nov 24 09:30:16 compute-0 ceph-mon[74331]: osdmap e97: 3 total, 3 up, 3 in
Nov 24 09:30:16 compute-0 ceph-mon[74331]: pgmap v38: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 09:30:16 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 09:30:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:30:16 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:16 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:30:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:30:16 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:16 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:30:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:30:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
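Each mgr-to-mon command shows up twice in these audit lines, once as cmd=[...]: dispatch and later as cmd='[...]': finished, with the command itself serialized as JSON. A small sketch, assuming this journal has been saved to a plain file (compute-0-journal.log is a hypothetical name), that recovers the command stream from the audit lines:

    import json, re

    # Matches both the unquoted "dispatch" form and the quoted "finished" form.
    audit_re = re.compile(r"cmd='?(\[.*\])'?: (dispatch|finished)")

    for line in open("compute-0-journal.log"):   # hypothetical capture of this journal
        m = audit_re.search(line)
        if m:
            cmds, state = json.loads(m.group(1)), m.group(2)
            for cmd in cmds:
                print(state, cmd.get("prefix"), cmd.get("pool", ""))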
Nov 24 09:30:16 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 24 09:30:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:16 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:16 compute-0 podman[105420]: 2025-11-24 09:30:16.487972686 +0000 UTC m=+0.036968036 container create 99dce80ba1428871dac28724a0471bbbbcbd65755e1f2351175ee52ae96f64f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:30:16 compute-0 systemd[1]: Started libpod-conmon-99dce80ba1428871dac28724a0471bbbbcbd65755e1f2351175ee52ae96f64f6.scope.
Nov 24 09:30:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:16 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab20001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:16 compute-0 podman[105420]: 2025-11-24 09:30:16.558186815 +0000 UTC m=+0.107182215 container init 99dce80ba1428871dac28724a0471bbbbcbd65755e1f2351175ee52ae96f64f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_robinson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:30:16 compute-0 podman[105420]: 2025-11-24 09:30:16.564193861 +0000 UTC m=+0.113189211 container start 99dce80ba1428871dac28724a0471bbbbcbd65755e1f2351175ee52ae96f64f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_robinson, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:30:16 compute-0 podman[105420]: 2025-11-24 09:30:16.567726307 +0000 UTC m=+0.116721677 container attach 99dce80ba1428871dac28724a0471bbbbcbd65755e1f2351175ee52ae96f64f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_robinson, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 09:30:16 compute-0 objective_robinson[105437]: 167 167
Nov 24 09:30:16 compute-0 podman[105420]: 2025-11-24 09:30:16.473370055 +0000 UTC m=+0.022365425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:30:16 compute-0 systemd[1]: libpod-99dce80ba1428871dac28724a0471bbbbcbd65755e1f2351175ee52ae96f64f6.scope: Deactivated successfully.
Nov 24 09:30:16 compute-0 podman[105420]: 2025-11-24 09:30:16.569449425 +0000 UTC m=+0.118444795 container died 99dce80ba1428871dac28724a0471bbbbcbd65755e1f2351175ee52ae96f64f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_robinson, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-85d6975e38ed037eb4e88c6914714a56e4ef5905e483f5fa75ba91898ef0c08b-merged.mount: Deactivated successfully.
Nov 24 09:30:16 compute-0 podman[105420]: 2025-11-24 09:30:16.608421205 +0000 UTC m=+0.157416565 container remove 99dce80ba1428871dac28724a0471bbbbcbd65755e1f2351175ee52ae96f64f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_robinson, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:30:16 compute-0 systemd[1]: libpod-conmon-99dce80ba1428871dac28724a0471bbbbcbd65755e1f2351175ee52ae96f64f6.scope: Deactivated successfully.
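That create / init / start / attach / died / remove run is one complete lifecycle of a throwaway cephadm helper container: objective_robinson lives for roughly 120 ms, just long enough to print its UID/GID pair (the "167 167" line, the ceph user inside the image). The same six-event pattern repeats below for vigorous_shamir, interesting_boyd, gallant_tu, priceless_ptolemy, and cool_chaplygin. A sketch that groups the podman events by container ID to confirm each helper ran to completion; the journal file name is hypothetical:

    import re
    from collections import defaultdict

    ev_re = re.compile(r'container (create|init|start|attach|died|remove) ([0-9a-f]{64})')

    events = defaultdict(list)
    for line in open("compute-0-journal.log"):   # hypothetical capture of this journal
        m = ev_re.search(line)
        if m:
            events[m.group(2)[:12]].append(m.group(1))

    for cid, evs in sorted(events.items()):
        # e.g. 99dce80ba142 -> create init start attach died remove
        print(cid, "->", " ".join(evs))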
Nov 24 09:30:16 compute-0 podman[105460]: 2025-11-24 09:30:16.750161239 +0000 UTC m=+0.043577179 container create 726257f7b392eb6f870d7c8ab9d0b83dfb643782d8ef9c3d4cd3838cfc4615ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_shamir, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:30:16 compute-0 systemd[1]: Started libpod-conmon-726257f7b392eb6f870d7c8ab9d0b83dfb643782d8ef9c3d4cd3838cfc4615ca.scope.
Nov 24 09:30:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1818c98cde7f44303147774c8172cd1de3a9151821b6d1714b5c7c1a9bd39a3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1818c98cde7f44303147774c8172cd1de3a9151821b6d1714b5c7c1a9bd39a3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1818c98cde7f44303147774c8172cd1de3a9151821b6d1714b5c7c1a9bd39a3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1818c98cde7f44303147774c8172cd1de3a9151821b6d1714b5c7c1a9bd39a3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1818c98cde7f44303147774c8172cd1de3a9151821b6d1714b5c7c1a9bd39a3d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
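The xfs notices fire as podman bind-mounts host paths into the container's overlay; 0x7fffffff is the signed 32-bit time_t ceiling, meaning these filesystems use the older (pre-bigtime) xfs inode timestamps that run out on 2038-01-19. A one-liner to confirm the date behind the constant:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch, the limit named in the kernel message
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))   # 2038-01-19 03:14:07+00:00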
Nov 24 09:30:16 compute-0 podman[105460]: 2025-11-24 09:30:16.731434974 +0000 UTC m=+0.024850934 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:30:16 compute-0 podman[105460]: 2025-11-24 09:30:16.837621101 +0000 UTC m=+0.131037061 container init 726257f7b392eb6f870d7c8ab9d0b83dfb643782d8ef9c3d4cd3838cfc4615ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_shamir, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:30:16 compute-0 podman[105460]: 2025-11-24 09:30:16.848677965 +0000 UTC m=+0.142093885 container start 726257f7b392eb6f870d7c8ab9d0b83dfb643782d8ef9c3d4cd3838cfc4615ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_shamir, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 24 09:30:16 compute-0 podman[105460]: 2025-11-24 09:30:16.851634606 +0000 UTC m=+0.145050566 container attach 726257f7b392eb6f870d7c8ab9d0b83dfb643782d8ef9c3d4cd3838cfc4615ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:30:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:16 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:17 compute-0 ceph-mgr[74626]: [dashboard INFO request] [192.168.122.100:44414] [POST] [200] [0.112s] [4.0B] [ae9a8874-b1cc-41d1-9d8a-2c9b4d39d09c] /api/prometheus_receiver
Nov 24 09:30:17 compute-0 vigorous_shamir[105477]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:30:17 compute-0 vigorous_shamir[105477]: --> All data devices are unavailable
Nov 24 09:30:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:17.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:17 compute-0 systemd[1]: libpod-726257f7b392eb6f870d7c8ab9d0b83dfb643782d8ef9c3d4cd3838cfc4615ca.scope: Deactivated successfully.
Nov 24 09:30:17 compute-0 podman[105460]: 2025-11-24 09:30:17.232081547 +0000 UTC m=+0.525497487 container died 726257f7b392eb6f870d7c8ab9d0b83dfb643782d8ef9c3d4cd3838cfc4615ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_shamir, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 09:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1818c98cde7f44303147774c8172cd1de3a9151821b6d1714b5c7c1a9bd39a3d-merged.mount: Deactivated successfully.
Nov 24 09:30:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 24 09:30:17 compute-0 podman[105460]: 2025-11-24 09:30:17.272823207 +0000 UTC m=+0.566239137 container remove 726257f7b392eb6f870d7c8ab9d0b83dfb643782d8ef9c3d4cd3838cfc4615ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_shamir, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:30:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v40: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 24 09:30:17 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 24 09:30:17 compute-0 systemd[1]: libpod-conmon-726257f7b392eb6f870d7c8ab9d0b83dfb643782d8ef9c3d4cd3838cfc4615ca.scope: Deactivated successfully.
Nov 24 09:30:17 compute-0 ceph-mon[74331]: 9.19 scrub starts
Nov 24 09:30:17 compute-0 ceph-mon[74331]: 9.19 scrub ok
Nov 24 09:30:17 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 24 09:30:17 compute-0 ceph-mon[74331]: osdmap e98: 3 total, 3 up, 3 in
Nov 24 09:30:17 compute-0 ceph-mon[74331]: 9.1d scrub starts
Nov 24 09:30:17 compute-0 ceph-mon[74331]: 9.1d scrub ok
Nov 24 09:30:17 compute-0 sudo[105350]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:17 compute-0 sudo[105505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:17 compute-0 sudo[105505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:17 compute-0 sudo[105505]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:17 compute-0 sudo[105530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:30:17 compute-0 sudo[105530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:17.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
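The radosgw beast lines are the RGW frontend's access log; the anonymous HEAD / probes arriving every couple of seconds from 192.168.122.100 and .102 have the shape of load-balancer health checks. A hedged parser for this one-line format, exercised against a sample copied from the entry above:

    import re

    beast_re = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<lat>[\d.]+)s')

    line = ('beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous '
            '[24/Nov/2025:09:30:17.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = beast_re.search(line)
    print(m.group("ip"), m.group("req"), m.group("status"), m.group("lat"))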
Nov 24 09:30:17 compute-0 podman[105595]: 2025-11-24 09:30:17.823662638 +0000 UTC m=+0.035383313 container create 7e89350ef31acfd7cd1d5817d1c9a77cad791d36158a888bbde61553fd0ce419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_boyd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:30:17 compute-0 systemd[1]: Started libpod-conmon-7e89350ef31acfd7cd1d5817d1c9a77cad791d36158a888bbde61553fd0ce419.scope.
Nov 24 09:30:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:17 compute-0 podman[105595]: 2025-11-24 09:30:17.902575036 +0000 UTC m=+0.114295721 container init 7e89350ef31acfd7cd1d5817d1c9a77cad791d36158a888bbde61553fd0ce419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_boyd, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:30:17 compute-0 podman[105595]: 2025-11-24 09:30:17.807487334 +0000 UTC m=+0.019208019 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:30:17 compute-0 podman[105595]: 2025-11-24 09:30:17.907957343 +0000 UTC m=+0.119678008 container start 7e89350ef31acfd7cd1d5817d1c9a77cad791d36158a888bbde61553fd0ce419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 09:30:17 compute-0 podman[105595]: 2025-11-24 09:30:17.911298906 +0000 UTC m=+0.123019591 container attach 7e89350ef31acfd7cd1d5817d1c9a77cad791d36158a888bbde61553fd0ce419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_boyd, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 09:30:17 compute-0 interesting_boyd[105611]: 167 167
Nov 24 09:30:17 compute-0 systemd[1]: libpod-7e89350ef31acfd7cd1d5817d1c9a77cad791d36158a888bbde61553fd0ce419.scope: Deactivated successfully.
Nov 24 09:30:17 compute-0 podman[105595]: 2025-11-24 09:30:17.913655361 +0000 UTC m=+0.125376066 container died 7e89350ef31acfd7cd1d5817d1c9a77cad791d36158a888bbde61553fd0ce419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_boyd, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-adcbce4ee137109528940b4784a1d6d90ee1a97dc4a33fa0d49ab4b60262ad29-merged.mount: Deactivated successfully.
Nov 24 09:30:17 compute-0 podman[105595]: 2025-11-24 09:30:17.949204447 +0000 UTC m=+0.160925112 container remove 7e89350ef31acfd7cd1d5817d1c9a77cad791d36158a888bbde61553fd0ce419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:30:17 compute-0 systemd[1]: libpod-conmon-7e89350ef31acfd7cd1d5817d1c9a77cad791d36158a888bbde61553fd0ce419.scope: Deactivated successfully.
Nov 24 09:30:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:18 compute-0 podman[105637]: 2025-11-24 09:30:18.117425118 +0000 UTC m=+0.045663896 container create 2b9220e6c05402624bc58ddf316813cd2e3d15e9c71ffcc23c6ebb551519cd9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:30:18 compute-0 sshd-session[105630]: Accepted publickey for zuul from 192.168.122.30 port 55434 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:30:18 compute-0 systemd-logind[822]: New session 39 of user zuul.
Nov 24 09:30:18 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 24 09:30:18 compute-0 systemd[1]: Started libpod-conmon-2b9220e6c05402624bc58ddf316813cd2e3d15e9c71ffcc23c6ebb551519cd9b.scope.
Nov 24 09:30:18 compute-0 sshd-session[105630]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:30:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36f3b3e269774ec8acb66d89621b0d43158b29819220b89255fdb3d3451feea4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36f3b3e269774ec8acb66d89621b0d43158b29819220b89255fdb3d3451feea4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36f3b3e269774ec8acb66d89621b0d43158b29819220b89255fdb3d3451feea4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36f3b3e269774ec8acb66d89621b0d43158b29819220b89255fdb3d3451feea4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:18 compute-0 podman[105637]: 2025-11-24 09:30:18.188603963 +0000 UTC m=+0.116842771 container init 2b9220e6c05402624bc58ddf316813cd2e3d15e9c71ffcc23c6ebb551519cd9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:30:18 compute-0 podman[105637]: 2025-11-24 09:30:18.095228968 +0000 UTC m=+0.023467796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:30:18 compute-0 podman[105637]: 2025-11-24 09:30:18.198015582 +0000 UTC m=+0.126254360 container start 2b9220e6c05402624bc58ddf316813cd2e3d15e9c71ffcc23c6ebb551519cd9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:30:18 compute-0 podman[105637]: 2025-11-24 09:30:18.203373208 +0000 UTC m=+0.131612036 container attach 2b9220e6c05402624bc58ddf316813cd2e3d15e9c71ffcc23c6ebb551519cd9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:30:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 24 09:30:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:18 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:18 compute-0 ceph-mon[74331]: 9.5 scrub starts
Nov 24 09:30:18 compute-0 ceph-mon[74331]: 9.5 scrub ok
Nov 24 09:30:18 compute-0 ceph-mon[74331]: pgmap v40: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:18 compute-0 ceph-mon[74331]: osdmap e99: 3 total, 3 up, 3 in
Nov 24 09:30:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 24 09:30:18 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 24 09:30:18 compute-0 gallant_tu[105655]: {
Nov 24 09:30:18 compute-0 gallant_tu[105655]:     "0": [
Nov 24 09:30:18 compute-0 gallant_tu[105655]:         {
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             "devices": [
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "/dev/loop3"
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             ],
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             "lv_name": "ceph_lv0",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             "lv_size": "21470642176",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             "name": "ceph_lv0",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             "tags": {
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.cluster_name": "ceph",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.crush_device_class": "",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.encrypted": "0",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.osd_id": "0",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.type": "block",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.vdo": "0",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:                 "ceph.with_tpm": "0"
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             },
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             "type": "block",
Nov 24 09:30:18 compute-0 gallant_tu[105655]:             "vg_name": "ceph_vg0"
Nov 24 09:30:18 compute-0 gallant_tu[105655]:         }
Nov 24 09:30:18 compute-0 gallant_tu[105655]:     ]
Nov 24 09:30:18 compute-0 gallant_tu[105655]: }
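gallant_tu is the ceph-volume lvm list --format json call from the sudo line above: the single LV on /dev/loop3 already carries osd.0 (ceph.osd_id=0, osd_fsid 4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c), which is why the batch pre-flight in vigorous_shamir reported "All data devices are unavailable" a moment earlier: there is nothing left to consume. A sketch that loads this JSON (assumed saved to lvm-list.json) and maps OSD IDs to their LVs and backing devices:

    import json

    with open("lvm-list.json") as f:             # hypothetical file holding the JSON above
        lvm = json.load(f)

    for osd_id, lvs in lvm.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, encrypted={tags['ceph.encrypted']})")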
Nov 24 09:30:18 compute-0 systemd[1]: libpod-2b9220e6c05402624bc58ddf316813cd2e3d15e9c71ffcc23c6ebb551519cd9b.scope: Deactivated successfully.
Nov 24 09:30:18 compute-0 podman[105637]: 2025-11-24 09:30:18.507645307 +0000 UTC m=+0.435884115 container died 2b9220e6c05402624bc58ddf316813cd2e3d15e9c71ffcc23c6ebb551519cd9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:30:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-36f3b3e269774ec8acb66d89621b0d43158b29819220b89255fdb3d3451feea4-merged.mount: Deactivated successfully.
Nov 24 09:30:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:18 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:18 compute-0 podman[105637]: 2025-11-24 09:30:18.549724513 +0000 UTC m=+0.477963291 container remove 2b9220e6c05402624bc58ddf316813cd2e3d15e9c71ffcc23c6ebb551519cd9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tu, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:30:18 compute-0 systemd[1]: libpod-conmon-2b9220e6c05402624bc58ddf316813cd2e3d15e9c71ffcc23c6ebb551519cd9b.scope: Deactivated successfully.
Nov 24 09:30:18 compute-0 sudo[105530]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:18 compute-0 sudo[105779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:30:18 compute-0 sudo[105779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:18 compute-0 sudo[105779]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:18 compute-0 sudo[105827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:30:18 compute-0 sudo[105827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:18 compute-0 sudo[105876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:30:18 compute-0 sudo[105876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:18 compute-0 sudo[105876]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:18 compute-0 python3.9[105873]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 09:30:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:18 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab20001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:19 compute-0 podman[105967]: 2025-11-24 09:30:19.194205707 +0000 UTC m=+0.042207040 container create 5bed1939a50fe6f8842157c3d25ba0dff2a9a6bdb235996dca51d971440da73e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 09:30:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:19.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:19 compute-0 systemd[1]: Started libpod-conmon-5bed1939a50fe6f8842157c3d25ba0dff2a9a6bdb235996dca51d971440da73e.scope.
Nov 24 09:30:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:19 compute-0 podman[105967]: 2025-11-24 09:30:19.175503204 +0000 UTC m=+0.023504557 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:30:19 compute-0 podman[105967]: 2025-11-24 09:30:19.275403298 +0000 UTC m=+0.123404651 container init 5bed1939a50fe6f8842157c3d25ba0dff2a9a6bdb235996dca51d971440da73e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ptolemy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 09:30:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v43: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:19 compute-0 podman[105967]: 2025-11-24 09:30:19.284387525 +0000 UTC m=+0.132388858 container start 5bed1939a50fe6f8842157c3d25ba0dff2a9a6bdb235996dca51d971440da73e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ptolemy, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 09:30:19 compute-0 podman[105967]: 2025-11-24 09:30:19.288309502 +0000 UTC m=+0.136310895 container attach 5bed1939a50fe6f8842157c3d25ba0dff2a9a6bdb235996dca51d971440da73e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ptolemy, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:30:19 compute-0 priceless_ptolemy[106008]: 167 167
Nov 24 09:30:19 compute-0 systemd[1]: libpod-5bed1939a50fe6f8842157c3d25ba0dff2a9a6bdb235996dca51d971440da73e.scope: Deactivated successfully.
Nov 24 09:30:19 compute-0 conmon[106008]: conmon 5bed1939a50fe6f88421 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5bed1939a50fe6f8842157c3d25ba0dff2a9a6bdb235996dca51d971440da73e.scope/container/memory.events
Nov 24 09:30:19 compute-0 podman[105967]: 2025-11-24 09:30:19.2926192 +0000 UTC m=+0.140620533 container died 5bed1939a50fe6f8842157c3d25ba0dff2a9a6bdb235996dca51d971440da73e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ptolemy, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 09:30:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 24 09:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4fdfe630af7ac147bdcc38af03b977a667ec62b2c56bbb55c96c47bce33c519-merged.mount: Deactivated successfully.
Nov 24 09:30:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 24 09:30:19 compute-0 ceph-mon[74331]: osdmap e100: 3 total, 3 up, 3 in
Nov 24 09:30:19 compute-0 podman[105967]: 2025-11-24 09:30:19.354535752 +0000 UTC m=+0.202537085 container remove 5bed1939a50fe6f8842157c3d25ba0dff2a9a6bdb235996dca51d971440da73e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_ptolemy, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 09:30:19 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 24 09:30:19 compute-0 systemd[1]: libpod-conmon-5bed1939a50fe6f8842157c3d25ba0dff2a9a6bdb235996dca51d971440da73e.scope: Deactivated successfully.
Nov 24 09:30:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:19.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:19 compute-0 podman[106062]: 2025-11-24 09:30:19.502406853 +0000 UTC m=+0.040924565 container create 497ebb95140c629620128a11561b3d556e93ae982644141f8a318d8a06336e38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_chaplygin, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:30:19 compute-0 systemd[1]: Started libpod-conmon-497ebb95140c629620128a11561b3d556e93ae982644141f8a318d8a06336e38.scope.
Nov 24 09:30:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be8b7a8aaf77f63b85e607baee4485eb4e6a86c79dc5a02702fe0e1aecdfbde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be8b7a8aaf77f63b85e607baee4485eb4e6a86c79dc5a02702fe0e1aecdfbde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be8b7a8aaf77f63b85e607baee4485eb4e6a86c79dc5a02702fe0e1aecdfbde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be8b7a8aaf77f63b85e607baee4485eb4e6a86c79dc5a02702fe0e1aecdfbde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:30:19 compute-0 podman[106062]: 2025-11-24 09:30:19.484029038 +0000 UTC m=+0.022546770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:30:19 compute-0 podman[106062]: 2025-11-24 09:30:19.591439969 +0000 UTC m=+0.129957701 container init 497ebb95140c629620128a11561b3d556e93ae982644141f8a318d8a06336e38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_chaplygin, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:30:19 compute-0 podman[106062]: 2025-11-24 09:30:19.60056575 +0000 UTC m=+0.139083462 container start 497ebb95140c629620128a11561b3d556e93ae982644141f8a318d8a06336e38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:30:19 compute-0 podman[106062]: 2025-11-24 09:30:19.604323513 +0000 UTC m=+0.142841285 container attach 497ebb95140c629620128a11561b3d556e93ae982644141f8a318d8a06336e38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_chaplygin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:30:20 compute-0 python3.9[106197]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:30:20 compute-0 lvm[106256]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:30:20 compute-0 lvm[106256]: VG ceph_vg0 finished
Nov 24 09:30:20 compute-0 cool_chaplygin[106079]: {}
Nov 24 09:30:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:20 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:20 compute-0 systemd[1]: libpod-497ebb95140c629620128a11561b3d556e93ae982644141f8a318d8a06336e38.scope: Deactivated successfully.
Nov 24 09:30:20 compute-0 systemd[1]: libpod-497ebb95140c629620128a11561b3d556e93ae982644141f8a318d8a06336e38.scope: Consumed 1.181s CPU time.
Nov 24 09:30:20 compute-0 podman[106062]: 2025-11-24 09:30:20.331923081 +0000 UTC m=+0.870440803 container died 497ebb95140c629620128a11561b3d556e93ae982644141f8a318d8a06336e38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_chaplygin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 09:30:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 24 09:30:20 compute-0 ceph-mon[74331]: pgmap v43: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:20 compute-0 ceph-mon[74331]: osdmap e101: 3 total, 3 up, 3 in
Nov 24 09:30:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5be8b7a8aaf77f63b85e607baee4485eb4e6a86c79dc5a02702fe0e1aecdfbde-merged.mount: Deactivated successfully.
Nov 24 09:30:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 24 09:30:20 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 24 09:30:20 compute-0 podman[106062]: 2025-11-24 09:30:20.44255584 +0000 UTC m=+0.981073572 container remove 497ebb95140c629620128a11561b3d556e93ae982644141f8a318d8a06336e38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:30:20 compute-0 systemd[1]: libpod-conmon-497ebb95140c629620128a11561b3d556e93ae982644141f8a318d8a06336e38.scope: Deactivated successfully.
Nov 24 09:30:20 compute-0 sudo[105827]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:30:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:20 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:30:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:20 compute-0 sudo[106296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:30:20 compute-0 sudo[106296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:20 compute-0 sudo[106296]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:20 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:30:20] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:30:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:30:20] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:30:21 compute-0 sudo[106447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbdousocqtofsjmppuixmlxkvozgezhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976620.7962074-93-259436660456619/AnsiballZ_command.py'
Nov 24 09:30:21 compute-0 sudo[106447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:30:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:21.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 09:30:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Nov 24 09:30:21 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 09:30:21 compute-0 python3.9[106449]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:30:21 compute-0 ceph-mon[74331]: osdmap e102: 3 total, 3 up, 3 in
Nov 24 09:30:21 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:21 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:30:21 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 09:30:21 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 09:30:21 compute-0 sudo[106447]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:30:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:21.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:30:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 24 09:30:21 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 24 09:30:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 24 09:30:21 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 24 09:30:21 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 103 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=2 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=103 pruub=12.479741096s) [1] r=-1 lpr=103 pi=[54,103)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 240.069793701s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:21 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 103 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=2 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=103 pruub=12.479701042s) [1] r=-1 lpr=103 pi=[54,103)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 240.069793701s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:22 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab20001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:22 compute-0 sudo[106601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmmzbqtmzngdkqeqnnccvllfishaakdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976621.9408057-129-107037034238552/AnsiballZ_stat.py'
Nov 24 09:30:22 compute-0 sudo[106601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:30:22 compute-0 ceph-mon[74331]: pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 24 09:30:22 compute-0 ceph-mon[74331]: 9.1f deep-scrub starts
Nov 24 09:30:22 compute-0 ceph-mon[74331]: 9.1f deep-scrub ok
Nov 24 09:30:22 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 24 09:30:22 compute-0 ceph-mon[74331]: osdmap e103: 3 total, 3 up, 3 in
Nov 24 09:30:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:22 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:22 compute-0 python3.9[106603]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:30:22 compute-0 sudo[106601]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 24 09:30:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 24 09:30:22 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 24 09:30:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 104 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=2 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=104) [1]/[0] r=0 lpr=104 pi=[54,104)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:22 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 104 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=2 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=104) [1]/[0] r=0 lpr=104 pi=[54,104)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:22 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:30:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:23.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:30:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Nov 24 09:30:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 09:30:23 compute-0 sudo[106756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onlvttjvldzvoqztrbzjyfjxhtznnunz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976622.8498082-162-64255265277207/AnsiballZ_file.py'
Nov 24 09:30:23 compute-0 sudo[106756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:30:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:23.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:23 compute-0 python3.9[106758]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:30:23 compute-0 sudo[106756]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 24 09:30:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 24 09:30:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 24 09:30:23 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 24 09:30:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 105 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=105 pruub=10.453415871s) [1] r=-1 lpr=105 pi=[54,105)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 240.069564819s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 105 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=105 pruub=10.453376770s) [1] r=-1 lpr=105 pi=[54,105)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 240.069564819s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:23 compute-0 ceph-mon[74331]: osdmap e104: 3 total, 3 up, 3 in
Nov 24 09:30:23 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 09:30:23 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 09:30:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 105 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=104/105 n=2 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=104) [1]/[0] async=[1] r=0 lpr=104 pi=[54,104)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:24 compute-0 sudo[106909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xijzplzejktjtxwjfjefolbhlknkzavp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976623.835486-189-148004065898552/AnsiballZ_file.py'
Nov 24 09:30:24 compute-0 sudo[106909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:30:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:24 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:24 compute-0 python3.9[106911]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:30:24 compute-0 sudo[106909]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:24 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 24 09:30:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 24 09:30:24 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 24 09:30:24 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 106 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=106) [1]/[0] r=0 lpr=106 pi=[54,106)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:24 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 106 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=106) [1]/[0] r=0 lpr=106 pi=[54,106)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:24 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 106 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=104/105 n=2 ec=54/39 lis/c=104/54 les/c/f=105/55/0 sis=106 pruub=15.004294395s) [1] async=[1] r=-1 lpr=106 pi=[54,106)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 245.629806519s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:24 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 106 pg[9.10( v 45'1130 (0'0,45'1130] local-lis/les=104/105 n=2 ec=54/39 lis/c=104/54 les/c/f=105/55/0 sis=106 pruub=15.004071236s) [1] r=-1 lpr=106 pi=[54,106)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 245.629806519s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:24 compute-0 ceph-mon[74331]: pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:24 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 24 09:30:24 compute-0 ceph-mon[74331]: osdmap e105: 3 total, 3 up, 3 in
Nov 24 09:30:24 compute-0 ceph-mon[74331]: osdmap e106: 3 total, 3 up, 3 in
Nov 24 09:30:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:24 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:25 compute-0 python3.9[107061]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:30:25 compute-0 network[107079]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:30:25 compute-0 network[107080]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:30:25 compute-0 network[107081]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:30:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:25.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Nov 24 09:30:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 09:30:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:25.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 24 09:30:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 09:30:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 24 09:30:25 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 24 09:30:25 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 107 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=107 pruub=8.430790901s) [1] r=-1 lpr=107 pi=[54,107)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 240.074020386s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:25 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 107 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=107 pruub=8.430728912s) [1] r=-1 lpr=107 pi=[54,107)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 240.074020386s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 09:30:25 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 09:30:25 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 09:30:25 compute-0 ceph-mon[74331]: osdmap e107: 3 total, 3 up, 3 in
Nov 24 09:30:25 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 107 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=106/107 n=5 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=106) [1]/[0] async=[1] r=0 lpr=106 pi=[54,106)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:26 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:26 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 24 09:30:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 24 09:30:26 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 24 09:30:26 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 108 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=108) [1]/[0] r=0 lpr=108 pi=[54,108)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:26 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 108 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=106/107 n=5 ec=54/39 lis/c=106/54 les/c/f=107/55/0 sis=108 pruub=15.010944366s) [1] async=[1] r=-1 lpr=108 pi=[54,108)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 247.657546997s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:26 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 108 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=54/55 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=108) [1]/[0] r=0 lpr=108 pi=[54,108)/1 crt=45'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:26 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 108 pg[9.11( v 45'1130 (0'0,45'1130] local-lis/les=106/107 n=5 ec=54/39 lis/c=106/54 les/c/f=107/55/0 sis=108 pruub=15.010870934s) [1] r=-1 lpr=108 pi=[54,108)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 247.657546997s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:26 compute-0 ceph-mon[74331]: pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:26 compute-0 ceph-mon[74331]: osdmap e108: 3 total, 3 up, 3 in
Nov 24 09:30:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:26 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:30:26.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:30:26 compute-0 ceph-mgr[74626]: [dashboard INFO request] [192.168.122.100:44414] [POST] [200] [0.003s] [4.0B] [7a68a975-aa13-4ffa-9037-0bc62257e032] /api/prometheus_receiver
Nov 24 09:30:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.002000053s ======
Nov 24 09:30:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:27.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Nov 24 09:30:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:27.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 24 09:30:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 24 09:30:27 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 24 09:30:27 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 109 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=108/109 n=4 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=108) [1]/[0] async=[1] r=0 lpr=108 pi=[54,108)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 24 09:30:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 24 09:30:28 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 24 09:30:28 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 110 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=108/109 n=4 ec=54/39 lis/c=108/54 les/c/f=109/55/0 sis=110 pruub=15.634166718s) [1] async=[1] r=-1 lpr=110 pi=[54,110)/1 crt=45'1130 lcod 0'0 mlcod 0'0 active pruub 249.662033081s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:28 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 110 pg[9.12( v 45'1130 (0'0,45'1130] local-lis/les=108/109 n=4 ec=54/39 lis/c=108/54 les/c/f=109/55/0 sis=110 pruub=15.634110451s) [1] r=-1 lpr=110 pi=[54,110)/1 crt=45'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.662033081s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:28 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:28 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:28 compute-0 ceph-mon[74331]: pgmap v55: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:28 compute-0 ceph-mon[74331]: osdmap e109: 3 total, 3 up, 3 in
Nov 24 09:30:28 compute-0 ceph-mon[74331]: osdmap e110: 3 total, 3 up, 3 in
Nov 24 09:30:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:28 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 24 09:30:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 24 09:30:29 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 24 09:30:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:29.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:30:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:29.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:30:29 compute-0 python3.9[107345]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:30:30 compute-0 ceph-mon[74331]: osdmap e111: 3 total, 3 up, 3 in
Nov 24 09:30:30 compute-0 ceph-mon[74331]: pgmap v59: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 09:30:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:30 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:30 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:30 compute-0 python3.9[107496]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:30:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:30 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:30:30] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Nov 24 09:30:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:30:30] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Nov 24 09:30:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:30:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:31.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 445 B/s rd, 0 op/s; 23 B/s, 0 objects/s recovering
Nov 24 09:30:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Nov 24 09:30:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 09:30:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:31.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:32 compute-0 python3.9[107651]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:30:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 24 09:30:32 compute-0 ceph-mon[74331]: pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 445 B/s rd, 0 op/s; 23 B/s, 0 objects/s recovering
Nov 24 09:30:32 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 09:30:32 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 09:30:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 24 09:30:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 24 09:30:32 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 24 09:30:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:32 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:32 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:32 compute-0 sudo[107808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuhhzvauhymqfoxinvmmpwfbyckvrejg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976632.653949-333-167546631831937/AnsiballZ_setup.py'
Nov 24 09:30:32 compute-0 sudo[107808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:30:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:32 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:33 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 24 09:30:33 compute-0 ceph-mon[74331]: osdmap e112: 3 total, 3 up, 3 in
Nov 24 09:30:33 compute-0 python3.9[107810]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:30:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:33.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 0 op/s; 19 B/s, 0 objects/s recovering
Nov 24 09:30:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Nov 24 09:30:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 09:30:33 compute-0 sudo[107808]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:33.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:33 compute-0 sudo[107893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoidpdebtysuiskezcmqyeuwuxipmtwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976632.653949-333-167546631831937/AnsiballZ_dnf.py'
Nov 24 09:30:33 compute-0 sudo[107893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:30:34 compute-0 python3.9[107895]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:30:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 24 09:30:34 compute-0 ceph-mon[74331]: pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 366 B/s rd, 0 op/s; 19 B/s, 0 objects/s recovering
Nov 24 09:30:34 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 09:30:34 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 09:30:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 24 09:30:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 24 09:30:34 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 24 09:30:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:34 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:34 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:34 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:35 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 24 09:30:35 compute-0 ceph-mon[74331]: osdmap e113: 3 total, 3 up, 3 in
Nov 24 09:30:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:35.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 330 B/s rd, 0 op/s; 17 B/s, 0 objects/s recovering
Nov 24 09:30:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Nov 24 09:30:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 09:30:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 24 09:30:36 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 24 09:30:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 24 09:30:36 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 24 09:30:36 compute-0 ceph-mon[74331]: pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 330 B/s rd, 0 op/s; 17 B/s, 0 objects/s recovering
Nov 24 09:30:36 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 09:30:36 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 09:30:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:36 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:36 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:36 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 24 09:30:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 24 09:30:37 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 24 09:30:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:37.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:37 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 24 09:30:37 compute-0 ceph-mon[74331]: osdmap e114: 3 total, 3 up, 3 in
Nov 24 09:30:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 199 B/s rd, 0 op/s
Nov 24 09:30:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:37.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 24 09:30:38 compute-0 ceph-mon[74331]: osdmap e115: 3 total, 3 up, 3 in
Nov 24 09:30:38 compute-0 ceph-mon[74331]: pgmap v67: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 199 B/s rd, 0 op/s
Nov 24 09:30:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 24 09:30:38 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 24 09:30:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:38 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:38 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:38 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:39 compute-0 sudo[107968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:30:39 compute-0 sudo[107968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:39 compute-0 sudo[107968]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:30:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:39.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:30:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 24 09:30:39 compute-0 ceph-mon[74331]: osdmap e116: 3 total, 3 up, 3 in
Nov 24 09:30:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 201 B/s rd, 0 op/s
Nov 24 09:30:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 24 09:30:39 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 24 09:30:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:39.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=infra.usagestats t=2025-11-24T09:30:40.166731427Z level=info msg="Usage stats are ready to report"
Nov 24 09:30:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 24 09:30:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:40 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:40 compute-0 ceph-mon[74331]: pgmap v69: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 201 B/s rd, 0 op/s
Nov 24 09:30:40 compute-0 ceph-mon[74331]: osdmap e117: 3 total, 3 up, 3 in
Nov 24 09:30:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 24 09:30:40 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 24 09:30:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:40 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:40 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:30:40] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Nov 24 09:30:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:30:40] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Nov 24 09:30:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:41.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 504 B/s rd, 0 op/s; 54 B/s, 1 objects/s recovering
Nov 24 09:30:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Nov 24 09:30:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 09:30:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 24 09:30:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 24 09:30:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 24 09:30:41 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 24 09:30:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:30:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:41.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:30:41 compute-0 ceph-mon[74331]: osdmap e118: 3 total, 3 up, 3 in
Nov 24 09:30:41 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 09:30:41 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 09:30:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:42 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:42 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:42 compute-0 ceph-mon[74331]: pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 504 B/s rd, 0 op/s; 54 B/s, 1 objects/s recovering
Nov 24 09:30:42 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 24 09:30:42 compute-0 ceph-mon[74331]: osdmap e119: 3 total, 3 up, 3 in
Nov 24 09:30:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:42 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab200095a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 09:30:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 24 09:30:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 24 09:30:43 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 24 09:30:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:43.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v75: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 54 B/s, 1 objects/s recovering
Nov 24 09:30:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Nov 24 09:30:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 09:30:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:43.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 24 09:30:44 compute-0 ceph-mon[74331]: osdmap e120: 3 total, 3 up, 3 in
Nov 24 09:30:44 compute-0 ceph-mon[74331]: pgmap v75: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 54 B/s, 1 objects/s recovering
Nov 24 09:30:44 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 09:30:44 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 09:30:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:44 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:44 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:44 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:45.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v76: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 36 B/s, 1 objects/s recovering
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:30:45
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.control', 'images', '.rgw.root', '.nfs', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.meta']
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:30:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 09:30:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 24 09:30:45 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 24 09:30:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Nov 24 09:30:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:30:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:30:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:30:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:45.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:30:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab2000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 24 09:30:46 compute-0 ceph-mon[74331]: pgmap v76: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 36 B/s, 1 objects/s recovering
Nov 24 09:30:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 09:30:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:30:46 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 09:30:46 compute-0 ceph-mon[74331]: osdmap e121: 3 total, 3 up, 3 in
Nov 24 09:30:46 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 09:30:46 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 09:30:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 24 09:30:46 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 24 09:30:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:30:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:47.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:30:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 176 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:30:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Nov 24 09:30:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 09:30:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 24 09:30:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:30:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:47.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:30:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 24 09:30:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 24 09:30:47 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 09:30:47 compute-0 ceph-mon[74331]: osdmap e122: 3 total, 3 up, 3 in
Nov 24 09:30:47 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 09:30:47 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 09:30:47 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 24 09:30:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:30:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:48 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:48 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab2000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:48 compute-0 ceph-mon[74331]: pgmap v79: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 176 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:30:48 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 24 09:30:48 compute-0 ceph-mon[74331]: osdmap e123: 3 total, 3 up, 3 in
Nov 24 09:30:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:48 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:30:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:49.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:30:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v81: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:30:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Nov 24 09:30:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 09:30:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:49.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 24 09:30:49 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 09:30:49 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 09:30:49 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 24 09:30:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 24 09:30:49 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 24 09:30:49 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 124 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=87/87 les/c/f=88/88/0 sis=124) [0] r=0 lpr=124 pi=[87,124)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:50 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:50 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 24 09:30:50 compute-0 ceph-mon[74331]: pgmap v81: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:30:50 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 24 09:30:50 compute-0 ceph-mon[74331]: osdmap e124: 3 total, 3 up, 3 in
Nov 24 09:30:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 24 09:30:50 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 24 09:30:50 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 125 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=87/87 les/c/f=88/88/0 sis=125) [0]/[2] r=-1 lpr=125 pi=[87,125)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:50 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 125 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=87/87 les/c/f=88/88/0 sis=125) [0]/[2] r=-1 lpr=125 pi=[87,125)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:50 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab2000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:30:50] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Nov 24 09:30:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:30:50] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Nov 24 09:30:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:30:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:51.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:30:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v84: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 429 B/s rd, 0 op/s
Nov 24 09:30:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:30:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:51.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:30:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 24 09:30:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 24 09:30:51 compute-0 ceph-mon[74331]: osdmap e125: 3 total, 3 up, 3 in
Nov 24 09:30:51 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 24 09:30:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:52 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:52 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 24 09:30:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 24 09:30:52 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 24 09:30:52 compute-0 ceph-mon[74331]: pgmap v84: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 429 B/s rd, 0 op/s
Nov 24 09:30:52 compute-0 ceph-mon[74331]: osdmap e126: 3 total, 3 up, 3 in
Nov 24 09:30:52 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 127 pg[9.19( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=7 ec=54/39 lis/c=125/87 les/c/f=126/88/0 sis=127) [0] r=0 lpr=127 pi=[87,127)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:52 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 127 pg[9.19( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=7 ec=54/39 lis/c=125/87 les/c/f=126/88/0 sis=127) [0] r=0 lpr=127 pi=[87,127)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:52 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:30:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:30:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:53.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:30:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:30:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:30:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:53.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:30:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 24 09:30:53 compute-0 ceph-mon[74331]: osdmap e127: 3 total, 3 up, 3 in
Nov 24 09:30:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 24 09:30:53 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 24 09:30:53 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 128 pg[9.19( v 45'1130 (0'0,45'1130] local-lis/les=127/128 n=7 ec=54/39 lis/c=125/87 les/c/f=126/88/0 sis=127) [0] r=0 lpr=127 pi=[87,127)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:30:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:54 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab2000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:54 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:54 compute-0 ceph-mon[74331]: pgmap v87: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:30:54 compute-0 ceph-mon[74331]: osdmap e128: 3 total, 3 up, 3 in
Nov 24 09:30:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:54 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:55.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 440 B/s rd, 0 op/s
Nov 24 09:30:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:30:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:55.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:30:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:56 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:56 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab2000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:56 compute-0 ceph-mon[74331]: pgmap v89: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 440 B/s rd, 0 op/s
Nov 24 09:30:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:56 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:30:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:57.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:30:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Nov 24 09:30:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Nov 24 09:30:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 09:30:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:30:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:57.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:30:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 24 09:30:57 compute-0 ceph-mon[74331]: pgmap v90: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Nov 24 09:30:57 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 09:30:57 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 09:30:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 24 09:30:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 24 09:30:57 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 24 09:30:57 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 129 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=129) [0] r=0 lpr=129 pi=[90,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:30:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:30:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 24 09:30:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 24 09:30:58 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 24 09:30:58 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 130 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=130) [0]/[1] r=-1 lpr=130 pi=[90,130)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:30:58 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 130 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=90/90 les/c/f=91/91/0 sis=130) [0]/[1] r=-1 lpr=130 pi=[90,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:30:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:58 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:58 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:58 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 24 09:30:58 compute-0 ceph-mon[74331]: osdmap e129: 3 total, 3 up, 3 in
Nov 24 09:30:58 compute-0 ceph-mon[74331]: osdmap e130: 3 total, 3 up, 3 in
Nov 24 09:30:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:30:58 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab2000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:30:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 24 09:30:59 compute-0 sudo[108091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:30:59 compute-0 sudo[108091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:30:59 compute-0 sudo[108091]: pam_unix(sudo:session): session closed for user root
Nov 24 09:30:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 24 09:30:59 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 24 09:30:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000020s ======
Nov 24 09:30:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:30:59.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Nov 24 09:30:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 183 B/s rd, 0 op/s; 19 B/s, 1 objects/s recovering
Nov 24 09:30:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Nov 24 09:30:59 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 09:30:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:30:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:30:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:30:59.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 24 09:31:00 compute-0 ceph-mon[74331]: osdmap e131: 3 total, 3 up, 3 in
Nov 24 09:31:00 compute-0 ceph-mon[74331]: pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 183 B/s rd, 0 op/s; 19 B/s, 1 objects/s recovering
Nov 24 09:31:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 09:31:00 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 09:31:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 09:31:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 24 09:31:00 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 24 09:31:00 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 132 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=132) [0] r=0 lpr=132 pi=[66,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:31:00 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 132 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=130/90 les/c/f=131/91/0 sis=132) [0] r=0 lpr=132 pi=[90,132)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:00 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 132 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=4 ec=54/39 lis/c=130/90 les/c/f=131/91/0 sis=132) [0] r=0 lpr=132 pi=[90,132)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:31:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:00 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:00 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:00 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:31:00] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Nov 24 09:31:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:31:00] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Nov 24 09:31:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 24 09:31:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:01.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 24 09:31:01 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 24 09:31:01 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 133 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=133) [0]/[2] r=-1 lpr=133 pi=[66,133)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:01 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 133 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=66/66 les/c/f=67/67/0 sis=133) [0]/[2] r=-1 lpr=133 pi=[66,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:31:01 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 133 pg[9.1a( v 45'1130 (0'0,45'1130] local-lis/les=132/133 n=4 ec=54/39 lis/c=130/90 les/c/f=131/91/0 sis=132) [0] r=0 lpr=132 pi=[90,132)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:31:01 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 09:31:01 compute-0 ceph-mon[74331]: osdmap e132: 3 total, 3 up, 3 in
Nov 24 09:31:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:31:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 579 B/s rd, 0 op/s; 31 B/s, 1 objects/s recovering
Nov 24 09:31:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:01.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 24 09:31:02 compute-0 ceph-mon[74331]: osdmap e133: 3 total, 3 up, 3 in
Nov 24 09:31:02 compute-0 ceph-mon[74331]: pgmap v97: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 579 B/s rd, 0 op/s; 31 B/s, 1 objects/s recovering
Nov 24 09:31:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 24 09:31:02 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 24 09:31:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:02 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab2000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:02 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:02 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 24 09:31:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 24 09:31:03 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 24 09:31:03 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 135 pg[9.1b( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=2 ec=54/39 lis/c=133/66 les/c/f=134/67/0 sis=135) [0] r=0 lpr=135 pi=[66,135)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:03 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 135 pg[9.1b( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=2 ec=54/39 lis/c=133/66 les/c/f=134/67/0 sis=135) [0] r=0 lpr=135 pi=[66,135)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:31:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:03.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Nov 24 09:31:03 compute-0 ceph-mon[74331]: osdmap e134: 3 total, 3 up, 3 in
Nov 24 09:31:03 compute-0 ceph-mon[74331]: osdmap e135: 3 total, 3 up, 3 in
Nov 24 09:31:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:03.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:04 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 24 09:31:04 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 24 09:31:04 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 24 09:31:04 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 136 pg[9.1b( v 45'1130 (0'0,45'1130] local-lis/les=135/136 n=2 ec=54/39 lis/c=133/66 les/c/f=134/67/0 sis=135) [0] r=0 lpr=135 pi=[66,135)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:31:04 compute-0 ceph-mon[74331]: pgmap v100: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Nov 24 09:31:04 compute-0 ceph-mon[74331]: osdmap e136: 3 total, 3 up, 3 in
Nov 24 09:31:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:04 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:04 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab2000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:04 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000020s ======
Nov 24 09:31:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:05.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Nov 24 09:31:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 507 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Nov 24 09:31:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:05.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:06 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:06 compute-0 ceph-mon[74331]: pgmap v102: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 507 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Nov 24 09:31:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:06 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:06 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab2000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:07.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:31:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Nov 24 09:31:07 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 09:31:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 24 09:31:07 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 24 09:31:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 24 09:31:07 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 09:31:07 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 09:31:07 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 24 09:31:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:07.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:08 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:08 compute-0 ceph-mon[74331]: pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:31:08 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 24 09:31:08 compute-0 ceph-mon[74331]: osdmap e137: 3 total, 3 up, 3 in
Nov 24 09:31:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:08 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:08 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:09.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 165 B/s rd, 0 op/s; 17 B/s, 0 objects/s recovering
Nov 24 09:31:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Nov 24 09:31:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 09:31:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 24 09:31:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 24 09:31:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 24 09:31:09 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 24 09:31:09 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 09:31:09 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 09:31:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:09.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:10 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab2000a6a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 24 09:31:10 compute-0 ceph-mon[74331]: pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 165 B/s rd, 0 op/s; 17 B/s, 0 objects/s recovering
Nov 24 09:31:10 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 24 09:31:10 compute-0 ceph-mon[74331]: osdmap e138: 3 total, 3 up, 3 in
Nov 24 09:31:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 24 09:31:10 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 24 09:31:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:10 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:10 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:31:10] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Nov 24 09:31:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:31:10] "GET /metrics HTTP/1.1" 200 48253 "" "Prometheus/2.51.0"
Nov 24 09:31:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:11.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:31:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 24 09:31:11 compute-0 ceph-mon[74331]: osdmap e139: 3 total, 3 up, 3 in
Nov 24 09:31:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 24 09:31:11 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 24 09:31:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:11.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:12 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 24 09:31:12 compute-0 ceph-mon[74331]: pgmap v108: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:31:12 compute-0 ceph-mon[74331]: osdmap e140: 3 total, 3 up, 3 in
Nov 24 09:31:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 24 09:31:12 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 24 09:31:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:12 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:12 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000020s ======
Nov 24 09:31:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:13.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Nov 24 09:31:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:31:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 24 09:31:13 compute-0 ceph-mon[74331]: osdmap e141: 3 total, 3 up, 3 in
Nov 24 09:31:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 24 09:31:13 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 24 09:31:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:13.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:14 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:14 compute-0 ceph-mon[74331]: pgmap v111: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:31:14 compute-0 ceph-mon[74331]: osdmap e142: 3 total, 3 up, 3 in
Nov 24 09:31:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:14 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:14 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:15.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 419 B/s rd, 0 op/s
Nov 24 09:31:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:31:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:31:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:31:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fabee9bfc10>)]
Nov 24 09:31:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 24 09:31:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:31:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fabee9bfc40>)]
Nov 24 09:31:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 24 09:31:15 compute-0 sudo[107893]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:31:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:15.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:15 compute-0 sudo[108288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnakpazrjuawvqkngdxnlwvuocrikdzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976675.6424453-369-103755229600511/AnsiballZ_command.py'
Nov 24 09:31:15 compute-0 sudo[108288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:16 compute-0 python3.9[108290]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:31:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:16 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:16 compute-0 ceph-mon[74331]: pgmap v113: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 419 B/s rd, 0 op/s
Nov 24 09:31:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:16 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:16 compute-0 sudo[108288]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:16 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:17.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 511 B/s wr, 0 op/s; 36 B/s, 0 objects/s recovering
Nov 24 09:31:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Nov 24 09:31:17 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 09:31:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 24 09:31:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:17.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:17 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 09:31:17 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 09:31:17 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 24 09:31:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 24 09:31:17 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 24 09:31:17 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.mauvni(active, since 92s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:31:17 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 143 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=143) [0] r=0 lpr=143 pi=[75,143)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:31:17 compute-0 sudo[108577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwvyqowidizndeyicjcvbkmewgdjevzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976677.3402593-393-253532365179222/AnsiballZ_selinux.py'
Nov 24 09:31:17 compute-0 sudo[108577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 24 09:31:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 24 09:31:18 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 24 09:31:18 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 144 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=144) [0]/[1] r=-1 lpr=144 pi=[75,144)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:18 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 144 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=75/75 les/c/f=76/76/0 sis=144) [0]/[1] r=-1 lpr=144 pi=[75,144)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:31:18 compute-0 python3.9[108579]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 09:31:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:18 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:18 compute-0 sudo[108577]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:18 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14002400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:18 compute-0 ceph-mon[74331]: pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 511 B/s wr, 0 op/s; 36 B/s, 0 objects/s recovering
Nov 24 09:31:18 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 24 09:31:18 compute-0 ceph-mon[74331]: osdmap e143: 3 total, 3 up, 3 in
Nov 24 09:31:18 compute-0 ceph-mon[74331]: mgrmap e32: compute-0.mauvni(active, since 92s), standbys: compute-2.rzcnzg, compute-1.qelqsg
Nov 24 09:31:18 compute-0 ceph-mon[74331]: osdmap e144: 3 total, 3 up, 3 in
Nov 24 09:31:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:18 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:19 compute-0 sudo[108730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlfomsnyyxxvnyilurtjxvnnvdceyqyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976678.8081775-426-220256134882460/AnsiballZ_command.py'
Nov 24 09:31:19 compute-0 sudo[108730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 24 09:31:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 24 09:31:19 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 24 09:31:19 compute-0 sudo[108733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:31:19 compute-0 sudo[108733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:19 compute-0 sudo[108733]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:19.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 177 B/s rd, 532 B/s wr, 0 op/s; 38 B/s, 0 objects/s recovering
Nov 24 09:31:19 compute-0 python3.9[108732]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 09:31:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 24 09:31:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:31:19 compute-0 sudo[108730]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:19.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:19 compute-0 sudo[108908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glsqvgpfqecujqxwsmlasrpzrzumsjzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976679.506684-450-71733684851151/AnsiballZ_file.py'
Nov 24 09:31:19 compute-0 sudo[108908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:19 compute-0 python3.9[108910]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:31:19 compute-0 sudo[108908]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 24 09:31:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:31:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 24 09:31:20 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 24 09:31:20 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 146 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=146) [0] r=0 lpr=146 pi=[101,146)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:31:20 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 146 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=144/75 les/c/f=145/76/0 sis=146) [0] r=0 lpr=146 pi=[75,146)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:20 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 146 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=144/75 les/c/f=145/76/0 sis=146) [0] r=0 lpr=146 pi=[75,146)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:31:20 compute-0 ceph-mon[74331]: osdmap e145: 3 total, 3 up, 3 in
Nov 24 09:31:20 compute-0 ceph-mon[74331]: pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 177 B/s rd, 532 B/s wr, 0 op/s; 38 B/s, 0 objects/s recovering
Nov 24 09:31:20 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:31:20 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 09:31:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:20 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:20 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:20 compute-0 sudo[109061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtdqffugjolciuvnhuikeogvtiduyjpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976680.297231-474-122932360820901/AnsiballZ_mount.py'
Nov 24 09:31:20 compute-0 sudo[109061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:31:20] "GET /metrics HTTP/1.1" 200 48247 "" "Prometheus/2.51.0"
Nov 24 09:31:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:31:20] "GET /metrics HTTP/1.1" 200 48247 "" "Prometheus/2.51.0"
Nov 24 09:31:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:20 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14002400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:20 compute-0 python3.9[109063]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 09:31:21 compute-0 sudo[109064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:31:21 compute-0 sudo[109064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:21 compute-0 sudo[109064]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:21 compute-0 sudo[109061]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:21 compute-0 sudo[109089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:31:21 compute-0 sudo[109089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 24 09:31:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 24 09:31:21 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 24 09:31:21 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 147 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=147) [0]/[1] r=-1 lpr=147 pi=[101,147)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:21 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 147 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/39 lis/c=101/101 les/c/f=102/102/0 sis=147) [0]/[1] r=-1 lpr=147 pi=[101,147)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 09:31:21 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 147 pg[9.1e( v 45'1130 (0'0,45'1130] local-lis/les=146/147 n=5 ec=54/39 lis/c=144/75 les/c/f=145/76/0 sis=146) [0] r=0 lpr=146 pi=[75,146)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:31:21 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 09:31:21 compute-0 ceph-mon[74331]: osdmap e146: 3 total, 3 up, 3 in
Nov 24 09:31:21 compute-0 ceph-mon[74331]: osdmap e147: 3 total, 3 up, 3 in
Nov 24 09:31:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:21.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v121: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 564 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Nov 24 09:31:21 compute-0 podman[109212]: 2025-11-24 09:31:21.56566619 +0000 UTC m=+0.054437052 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:31:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:21.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:21 compute-0 podman[109212]: 2025-11-24 09:31:21.658404604 +0000 UTC m=+0.147175446 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 24 09:31:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 24 09:31:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 24 09:31:22 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 24 09:31:22 compute-0 sudo[109472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmeynhwyuarmjejlzveykxiqzupexkna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976681.9011445-558-108465995628903/AnsiballZ_file.py'
Nov 24 09:31:22 compute-0 sudo[109472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:22 compute-0 ceph-mon[74331]: pgmap v121: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 564 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.203359) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682203399, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2877, "num_deletes": 251, "total_data_size": 7181296, "memory_usage": 7359512, "flush_reason": "Manual Compaction"}
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682234851, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6760016, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7980, "largest_seqno": 10856, "table_properties": {"data_size": 6746169, "index_size": 8997, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3781, "raw_key_size": 33429, "raw_average_key_size": 22, "raw_value_size": 6716526, "raw_average_value_size": 4474, "num_data_blocks": 390, "num_entries": 1501, "num_filter_entries": 1501, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976580, "oldest_key_time": 1763976580, "file_creation_time": 1763976682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 31564 microseconds, and 14020 cpu microseconds.
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.234914) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6760016 bytes OK
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.234946) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.235953) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.235973) EVENT_LOG_v1 {"time_micros": 1763976682235967, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.235998) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7168094, prev total WAL file size 7168094, number of live WAL files 2.
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.237519) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(6601KB)], [23(10MB)]
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682237557, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 18192732, "oldest_snapshot_seqno": -1}
Nov 24 09:31:22 compute-0 podman[109482]: 2025-11-24 09:31:22.282936205 +0000 UTC m=+0.071180202 container exec c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4093 keys, 14280022 bytes, temperature: kUnknown
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682323281, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14280022, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14247276, "index_size": 21436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 104532, "raw_average_key_size": 25, "raw_value_size": 14166940, "raw_average_value_size": 3461, "num_data_blocks": 918, "num_entries": 4093, "num_filter_entries": 4093, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763976682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.323884) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14280022 bytes
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.325357) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.3 rd, 165.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.4, 10.9 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(4.8) write-amplify(2.1) OK, records in: 4627, records dropped: 534 output_compression: NoCompression
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.325389) EVENT_LOG_v1 {"time_micros": 1763976682325374, "job": 8, "event": "compaction_finished", "compaction_time_micros": 86100, "compaction_time_cpu_micros": 31239, "output_level": 6, "num_output_files": 1, "total_output_size": 14280022, "num_input_records": 4627, "num_output_records": 4093, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682326696, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976682329136, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.237464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.329171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.329177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.329178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.329180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:31:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:31:22.329181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:31:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:22 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:22 compute-0 podman[109507]: 2025-11-24 09:31:22.362323183 +0000 UTC m=+0.060746168 container exec_died c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:31:22 compute-0 podman[109482]: 2025-11-24 09:31:22.367258159 +0000 UTC m=+0.155502166 container exec_died c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:31:22 compute-0 python3.9[109481]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:31:22 compute-0 sudo[109472]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:22 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:22 compute-0 podman[109580]: 2025-11-24 09:31:22.627807062 +0000 UTC m=+0.053642794 container exec 3adc7e4dbfb76acd70b92bdc8783d49c26735889ac1576ee9a74ae48f52acf62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:31:22 compute-0 podman[109580]: 2025-11-24 09:31:22.6663021 +0000 UTC m=+0.092137842 container exec_died 3adc7e4dbfb76acd70b92bdc8783d49c26735889ac1576ee9a74ae48f52acf62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:31:22 compute-0 podman[109724]: 2025-11-24 09:31:22.940777673 +0000 UTC m=+0.103405345 container exec 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:31:22 compute-0 podman[109724]: 2025-11-24 09:31:22.961845546 +0000 UTC m=+0.124473198 container exec_died 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:31:22 compute-0 sudo[109793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlduhuynzdyvubmzjitdpakhuusnotvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976682.6752045-582-114415380485302/AnsiballZ_stat.py'
Nov 24 09:31:22 compute-0 sudo[109793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:22 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 24 09:31:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 24 09:31:23 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 24 09:31:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 149 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=147/101 les/c/f=148/102/0 sis=149) [0] r=0 lpr=149 pi=[101,149)/1 luod=0'0 crt=45'1130 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 24 09:31:23 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 149 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=0/0 n=5 ec=54/39 lis/c=147/101 les/c/f=148/102/0 sis=149) [0] r=0 lpr=149 pi=[101,149)/1 crt=45'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 09:31:23 compute-0 podman[109840]: 2025-11-24 09:31:23.156987862 +0000 UTC m=+0.055468963 container exec da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.28.2, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, version=2.2.4, release=1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 24 09:31:23 compute-0 python3.9[109803]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:31:23 compute-0 podman[109840]: 2025-11-24 09:31:23.170471963 +0000 UTC m=+0.068953054 container exec_died da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, vcs-type=git, distribution-scope=public, version=2.2.4, release=1793, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc.)
Nov 24 09:31:23 compute-0 ceph-mon[74331]: osdmap e148: 3 total, 3 up, 3 in
Nov 24 09:31:23 compute-0 ceph-mon[74331]: osdmap e149: 3 total, 3 up, 3 in
Nov 24 09:31:23 compute-0 sudo[109793]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000020s ======
Nov 24 09:31:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:23.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Nov 24 09:31:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Nov 24 09:31:23 compute-0 podman[109938]: 2025-11-24 09:31:23.360325005 +0000 UTC m=+0.044963468 container exec 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:31:23 compute-0 podman[109938]: 2025-11-24 09:31:23.381065461 +0000 UTC m=+0.065703924 container exec_died 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:31:23 compute-0 sudo[110011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpoghnqbccatoaphqwxjfulihoganjmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976682.6752045-582-114415380485302/AnsiballZ_file.py'
Nov 24 09:31:23 compute-0 sudo[110011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:23 compute-0 podman[110058]: 2025-11-24 09:31:23.564770363 +0000 UTC m=+0.049101718 container exec 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:31:23 compute-0 python3.9[110019]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:31:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:23.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:23 compute-0 sudo[110011]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:23 compute-0 podman[110058]: 2025-11-24 09:31:23.726877738 +0000 UTC m=+0.211209083 container exec_died 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:31:24 compute-0 podman[110195]: 2025-11-24 09:31:24.116351274 +0000 UTC m=+0.051382275 container exec 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:31:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 24 09:31:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 24 09:31:24 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 24 09:31:24 compute-0 ceph-osd[82549]: osd.0 pg_epoch: 150 pg[9.1f( v 45'1130 (0'0,45'1130] local-lis/les=149/150 n=5 ec=54/39 lis/c=147/101 les/c/f=148/102/0 sis=149) [0] r=0 lpr=149 pi=[101,149)/1 crt=45'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 09:31:24 compute-0 podman[110195]: 2025-11-24 09:31:24.150596641 +0000 UTC m=+0.085627652 container exec_died 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:31:24 compute-0 sudo[109089]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:24 compute-0 ceph-mon[74331]: pgmap v124: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Nov 24 09:31:24 compute-0 ceph-mon[74331]: osdmap e150: 3 total, 3 up, 3 in
Nov 24 09:31:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:31:24 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:31:24 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:24 compute-0 sudo[110237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:31:24 compute-0 sudo[110237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:24 compute-0 sudo[110237]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:24 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14002400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:24 compute-0 sudo[110262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:31:24 compute-0 sudo[110262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:24 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:24 compute-0 sudo[110262]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:31:24 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:31:24 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:24 compute-0 sudo[110343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:31:24 compute-0 sudo[110343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:24 compute-0 sudo[110343]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:24 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:25 compute-0 sudo[110395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:31:25 compute-0 sudo[110395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:25 compute-0 sudo[110491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrvliqeegeymwhisifadxgkawyppohbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976684.9011438-645-207684330300354/AnsiballZ_stat.py'
Nov 24 09:31:25 compute-0 sudo[110491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:25 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:25 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:31:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:31:25 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:25 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:31:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:31:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:31:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:25.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v126: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 491 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Nov 24 09:31:25 compute-0 python3.9[110493]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:31:25 compute-0 sudo[110491]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:25 compute-0 podman[110538]: 2025-11-24 09:31:25.452588401 +0000 UTC m=+0.039017550 container create 84326d625843b7c2fcfc74bca1923d09489842fc3d6f22623407e4a1027eff52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:31:25 compute-0 systemd[1]: Started libpod-conmon-84326d625843b7c2fcfc74bca1923d09489842fc3d6f22623407e4a1027eff52.scope.
Nov 24 09:31:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:31:25 compute-0 podman[110538]: 2025-11-24 09:31:25.435318199 +0000 UTC m=+0.021747378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:31:25 compute-0 podman[110538]: 2025-11-24 09:31:25.570901815 +0000 UTC m=+0.157330984 container init 84326d625843b7c2fcfc74bca1923d09489842fc3d6f22623407e4a1027eff52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_mestorf, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:31:25 compute-0 podman[110538]: 2025-11-24 09:31:25.577884316 +0000 UTC m=+0.164313465 container start 84326d625843b7c2fcfc74bca1923d09489842fc3d6f22623407e4a1027eff52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_mestorf, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:31:25 compute-0 podman[110538]: 2025-11-24 09:31:25.580907901 +0000 UTC m=+0.167337080 container attach 84326d625843b7c2fcfc74bca1923d09489842fc3d6f22623407e4a1027eff52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:31:25 compute-0 nice_mestorf[110578]: 167 167
Nov 24 09:31:25 compute-0 systemd[1]: libpod-84326d625843b7c2fcfc74bca1923d09489842fc3d6f22623407e4a1027eff52.scope: Deactivated successfully.
Nov 24 09:31:25 compute-0 conmon[110578]: conmon 84326d625843b7c2fcfc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84326d625843b7c2fcfc74bca1923d09489842fc3d6f22623407e4a1027eff52.scope/container/memory.events
Nov 24 09:31:25 compute-0 podman[110538]: 2025-11-24 09:31:25.585561731 +0000 UTC m=+0.171990890 container died 84326d625843b7c2fcfc74bca1923d09489842fc3d6f22623407e4a1027eff52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:31:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:25.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-756050b8b2cb0b827ba1a02f4c2c92930fbf9fd710cce32820a53f6c410c12f9-merged.mount: Deactivated successfully.
Nov 24 09:31:25 compute-0 podman[110538]: 2025-11-24 09:31:25.622903134 +0000 UTC m=+0.209332293 container remove 84326d625843b7c2fcfc74bca1923d09489842fc3d6f22623407e4a1027eff52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:31:25 compute-0 systemd[1]: libpod-conmon-84326d625843b7c2fcfc74bca1923d09489842fc3d6f22623407e4a1027eff52.scope: Deactivated successfully.
Nov 24 09:31:25 compute-0 podman[110601]: 2025-11-24 09:31:25.777416117 +0000 UTC m=+0.048136357 container create d6c5df2c26d6bb3287c0e43a90258c6399a59d7bbae3c1e1eeee4fb4bf11bece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_northcutt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Nov 24 09:31:25 compute-0 systemd[1]: Started libpod-conmon-d6c5df2c26d6bb3287c0e43a90258c6399a59d7bbae3c1e1eeee4fb4bf11bece.scope.
Nov 24 09:31:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b8a3ed1916bd8610168e3bfde36bd47a6c9612aa90ba7968e90e2c662f2f3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b8a3ed1916bd8610168e3bfde36bd47a6c9612aa90ba7968e90e2c662f2f3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b8a3ed1916bd8610168e3bfde36bd47a6c9612aa90ba7968e90e2c662f2f3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b8a3ed1916bd8610168e3bfde36bd47a6c9612aa90ba7968e90e2c662f2f3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b8a3ed1916bd8610168e3bfde36bd47a6c9612aa90ba7968e90e2c662f2f3b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:25 compute-0 podman[110601]: 2025-11-24 09:31:25.755962345 +0000 UTC m=+0.026682685 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:31:25 compute-0 podman[110601]: 2025-11-24 09:31:25.853733428 +0000 UTC m=+0.124453678 container init d6c5df2c26d6bb3287c0e43a90258c6399a59d7bbae3c1e1eeee4fb4bf11bece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_northcutt, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:31:25 compute-0 podman[110601]: 2025-11-24 09:31:25.86731265 +0000 UTC m=+0.138032890 container start d6c5df2c26d6bb3287c0e43a90258c6399a59d7bbae3c1e1eeee4fb4bf11bece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_northcutt, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:31:25 compute-0 podman[110601]: 2025-11-24 09:31:25.870787945 +0000 UTC m=+0.141508175 container attach d6c5df2c26d6bb3287c0e43a90258c6399a59d7bbae3c1e1eeee4fb4bf11bece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 09:31:26 compute-0 practical_northcutt[110617]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:31:26 compute-0 practical_northcutt[110617]: --> All data devices are unavailable
Nov 24 09:31:26 compute-0 systemd[1]: libpod-d6c5df2c26d6bb3287c0e43a90258c6399a59d7bbae3c1e1eeee4fb4bf11bece.scope: Deactivated successfully.
Nov 24 09:31:26 compute-0 podman[110601]: 2025-11-24 09:31:26.19279671 +0000 UTC m=+0.463516950 container died d6c5df2c26d6bb3287c0e43a90258c6399a59d7bbae3c1e1eeee4fb4bf11bece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_northcutt, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-13b8a3ed1916bd8610168e3bfde36bd47a6c9612aa90ba7968e90e2c662f2f3b-merged.mount: Deactivated successfully.
Nov 24 09:31:26 compute-0 podman[110601]: 2025-11-24 09:31:26.23045092 +0000 UTC m=+0.501171170 container remove d6c5df2c26d6bb3287c0e43a90258c6399a59d7bbae3c1e1eeee4fb4bf11bece (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_northcutt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:31:26 compute-0 systemd[1]: libpod-conmon-d6c5df2c26d6bb3287c0e43a90258c6399a59d7bbae3c1e1eeee4fb4bf11bece.scope: Deactivated successfully.
Nov 24 09:31:26 compute-0 ceph-mon[74331]: pgmap v126: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 491 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Nov 24 09:31:26 compute-0 sudo[110395]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:26 compute-0 sudo[110697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:31:26 compute-0 sudo[110697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:26 compute-0 sudo[110697]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:26 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:26 compute-0 sudo[110722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:31:26 compute-0 sudo[110722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:26 compute-0 sudo[110820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cstooaozknfaohfykuzwhhujajogvyek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976686.121153-684-264941507819058/AnsiballZ_getent.py'
Nov 24 09:31:26 compute-0 sudo[110820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:26 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14003890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:26 compute-0 python3.9[110822]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 09:31:26 compute-0 sudo[110820]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:26 compute-0 podman[110865]: 2025-11-24 09:31:26.809435712 +0000 UTC m=+0.038586942 container create 8c6bd482c19d86b4027abfcc4a6744b31e349ea04c3f8dae83f44c40b50c1697 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_morse, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:31:26 compute-0 systemd[1]: Started libpod-conmon-8c6bd482c19d86b4027abfcc4a6744b31e349ea04c3f8dae83f44c40b50c1697.scope.
Nov 24 09:31:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:31:26 compute-0 podman[110865]: 2025-11-24 09:31:26.792559699 +0000 UTC m=+0.021710959 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:31:26 compute-0 podman[110865]: 2025-11-24 09:31:26.89542743 +0000 UTC m=+0.124578680 container init 8c6bd482c19d86b4027abfcc4a6744b31e349ea04c3f8dae83f44c40b50c1697 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_morse, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:31:26 compute-0 podman[110865]: 2025-11-24 09:31:26.903121316 +0000 UTC m=+0.132272546 container start 8c6bd482c19d86b4027abfcc4a6744b31e349ea04c3f8dae83f44c40b50c1697 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_morse, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:31:26 compute-0 podman[110865]: 2025-11-24 09:31:26.906432477 +0000 UTC m=+0.135583737 container attach 8c6bd482c19d86b4027abfcc4a6744b31e349ea04c3f8dae83f44c40b50c1697 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:31:26 compute-0 competent_morse[110900]: 167 167
Nov 24 09:31:26 compute-0 systemd[1]: libpod-8c6bd482c19d86b4027abfcc4a6744b31e349ea04c3f8dae83f44c40b50c1697.scope: Deactivated successfully.
Nov 24 09:31:26 compute-0 podman[110865]: 2025-11-24 09:31:26.91024119 +0000 UTC m=+0.139392420 container died 8c6bd482c19d86b4027abfcc4a6744b31e349ea04c3f8dae83f44c40b50c1697 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_morse, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ab67021da00c2a5b72c4ccb102cbf2290e000406a67c07ae780be2a4240c597-merged.mount: Deactivated successfully.
Nov 24 09:31:26 compute-0 podman[110865]: 2025-11-24 09:31:26.946217354 +0000 UTC m=+0.175368584 container remove 8c6bd482c19d86b4027abfcc4a6744b31e349ea04c3f8dae83f44c40b50c1697 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 09:31:26 compute-0 systemd[1]: libpod-conmon-8c6bd482c19d86b4027abfcc4a6744b31e349ea04c3f8dae83f44c40b50c1697.scope: Deactivated successfully.
Nov 24 09:31:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:26 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:27 compute-0 podman[110931]: 2025-11-24 09:31:27.107767927 +0000 UTC m=+0.058996599 container create f9d157f89e9baf1e899aa37081abcb3a6817161ba4a84e8dde911edb0344fcc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 24 09:31:27 compute-0 systemd[1]: Started libpod-conmon-f9d157f89e9baf1e899aa37081abcb3a6817161ba4a84e8dde911edb0344fcc7.scope.
Nov 24 09:31:27 compute-0 podman[110931]: 2025-11-24 09:31:27.080130833 +0000 UTC m=+0.031359515 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:31:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9023cc4877bed49aa1d3c77ee5b58b3c637a8b3557d1f060e2b218d94521b165/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9023cc4877bed49aa1d3c77ee5b58b3c637a8b3557d1f060e2b218d94521b165/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9023cc4877bed49aa1d3c77ee5b58b3c637a8b3557d1f060e2b218d94521b165/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9023cc4877bed49aa1d3c77ee5b58b3c637a8b3557d1f060e2b218d94521b165/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:27 compute-0 podman[110931]: 2025-11-24 09:31:27.226604253 +0000 UTC m=+0.177832905 container init f9d157f89e9baf1e899aa37081abcb3a6817161ba4a84e8dde911edb0344fcc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_thompson, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:31:27 compute-0 podman[110931]: 2025-11-24 09:31:27.236021396 +0000 UTC m=+0.187250028 container start f9d157f89e9baf1e899aa37081abcb3a6817161ba4a84e8dde911edb0344fcc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 09:31:27 compute-0 podman[110931]: 2025-11-24 09:31:27.239540971 +0000 UTC m=+0.190769613 container attach f9d157f89e9baf1e899aa37081abcb3a6817161ba4a84e8dde911edb0344fcc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_thompson, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:31:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:27.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:31:27 compute-0 sudo[111083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iudckxsnukfynnxkqnnnmeddbcuaycmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976687.1820602-714-85951474881358/AnsiballZ_getent.py'
Nov 24 09:31:27 compute-0 sudo[111083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:27 compute-0 funny_thompson[110967]: {
Nov 24 09:31:27 compute-0 funny_thompson[110967]:     "0": [
Nov 24 09:31:27 compute-0 funny_thompson[110967]:         {
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             "devices": [
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "/dev/loop3"
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             ],
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             "lv_name": "ceph_lv0",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             "lv_size": "21470642176",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             "name": "ceph_lv0",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             "tags": {
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.cluster_name": "ceph",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.crush_device_class": "",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.encrypted": "0",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.osd_id": "0",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.type": "block",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.vdo": "0",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:                 "ceph.with_tpm": "0"
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             },
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             "type": "block",
Nov 24 09:31:27 compute-0 funny_thompson[110967]:             "vg_name": "ceph_vg0"
Nov 24 09:31:27 compute-0 funny_thompson[110967]:         }
Nov 24 09:31:27 compute-0 funny_thompson[110967]:     ]
Nov 24 09:31:27 compute-0 funny_thompson[110967]: }
Nov 24 09:31:27 compute-0 systemd[1]: libpod-f9d157f89e9baf1e899aa37081abcb3a6817161ba4a84e8dde911edb0344fcc7.scope: Deactivated successfully.
Nov 24 09:31:27 compute-0 podman[110931]: 2025-11-24 09:31:27.555809473 +0000 UTC m=+0.507038115 container died f9d157f89e9baf1e899aa37081abcb3a6817161ba4a84e8dde911edb0344fcc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_thompson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 09:31:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9023cc4877bed49aa1d3c77ee5b58b3c637a8b3557d1f060e2b218d94521b165-merged.mount: Deactivated successfully.
Nov 24 09:31:27 compute-0 podman[110931]: 2025-11-24 09:31:27.60218015 +0000 UTC m=+0.553408782 container remove f9d157f89e9baf1e899aa37081abcb3a6817161ba4a84e8dde911edb0344fcc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_thompson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:31:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:27.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:27 compute-0 systemd[1]: libpod-conmon-f9d157f89e9baf1e899aa37081abcb3a6817161ba4a84e8dde911edb0344fcc7.scope: Deactivated successfully.
Nov 24 09:31:27 compute-0 sudo[110722]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:27 compute-0 sudo[111097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:31:27 compute-0 sudo[111097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:27 compute-0 sudo[111097]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:27 compute-0 python3.9[111085]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 09:31:27 compute-0 sudo[111122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:31:27 compute-0 sudo[111122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:27 compute-0 sudo[111083]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:28 compute-0 podman[111264]: 2025-11-24 09:31:28.201745494 +0000 UTC m=+0.048299519 container create 365bc468121f48ce0e42254d1d88233ae056ac7d4165080d34254e1a43f021d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilbur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 09:31:28 compute-0 systemd[1]: Started libpod-conmon-365bc468121f48ce0e42254d1d88233ae056ac7d4165080d34254e1a43f021d6.scope.
Nov 24 09:31:28 compute-0 podman[111264]: 2025-11-24 09:31:28.177512033 +0000 UTC m=+0.024066078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:31:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:31:28 compute-0 podman[111264]: 2025-11-24 09:31:28.29360816 +0000 UTC m=+0.140162245 container init 365bc468121f48ce0e42254d1d88233ae056ac7d4165080d34254e1a43f021d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilbur, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:31:28 compute-0 podman[111264]: 2025-11-24 09:31:28.30198983 +0000 UTC m=+0.148543845 container start 365bc468121f48ce0e42254d1d88233ae056ac7d4165080d34254e1a43f021d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilbur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:31:28 compute-0 podman[111264]: 2025-11-24 09:31:28.306267712 +0000 UTC m=+0.152821757 container attach 365bc468121f48ce0e42254d1d88233ae056ac7d4165080d34254e1a43f021d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilbur, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:31:28 compute-0 intelligent_wilbur[111280]: 167 167
Nov 24 09:31:28 compute-0 systemd[1]: libpod-365bc468121f48ce0e42254d1d88233ae056ac7d4165080d34254e1a43f021d6.scope: Deactivated successfully.
Nov 24 09:31:28 compute-0 podman[111264]: 2025-11-24 09:31:28.310669057 +0000 UTC m=+0.157223092 container died 365bc468121f48ce0e42254d1d88233ae056ac7d4165080d34254e1a43f021d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilbur, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c9e47b4b679af1b22c79177b642641af03abe83ec076e4e433cf65440d19f3b-merged.mount: Deactivated successfully.
Nov 24 09:31:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:28 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab00001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:28 compute-0 podman[111264]: 2025-11-24 09:31:28.352310692 +0000 UTC m=+0.198864727 container remove 365bc468121f48ce0e42254d1d88233ae056ac7d4165080d34254e1a43f021d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:31:28 compute-0 systemd[1]: libpod-conmon-365bc468121f48ce0e42254d1d88233ae056ac7d4165080d34254e1a43f021d6.scope: Deactivated successfully.
Nov 24 09:31:28 compute-0 ceph-mon[74331]: pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 24 09:31:28 compute-0 podman[111350]: 2025-11-24 09:31:28.530954814 +0000 UTC m=+0.043220530 container create fc64b38683cee42c310be654ad33e9239262b22fc934ca1e8d747079f94485fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:31:28 compute-0 sudo[111389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkpwaigqxujgiukdcteghfvwrxfnhpvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976688.0433786-738-75404979770709/AnsiballZ_group.py'
Nov 24 09:31:28 compute-0 sudo[111389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:28 compute-0 systemd[1]: Started libpod-conmon-fc64b38683cee42c310be654ad33e9239262b22fc934ca1e8d747079f94485fe.scope.
Nov 24 09:31:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:28 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:28 compute-0 podman[111350]: 2025-11-24 09:31:28.513785075 +0000 UTC m=+0.026050821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:31:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92c41aae7ecc8ed5e555c9a52b7e17b3637361ee34311650fe358593ede4964/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92c41aae7ecc8ed5e555c9a52b7e17b3637361ee34311650fe358593ede4964/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92c41aae7ecc8ed5e555c9a52b7e17b3637361ee34311650fe358593ede4964/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92c41aae7ecc8ed5e555c9a52b7e17b3637361ee34311650fe358593ede4964/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:31:28 compute-0 podman[111350]: 2025-11-24 09:31:28.63491405 +0000 UTC m=+0.147179796 container init fc64b38683cee42c310be654ad33e9239262b22fc934ca1e8d747079f94485fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 09:31:28 compute-0 podman[111350]: 2025-11-24 09:31:28.642757568 +0000 UTC m=+0.155023284 container start fc64b38683cee42c310be654ad33e9239262b22fc934ca1e8d747079f94485fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:31:28 compute-0 podman[111350]: 2025-11-24 09:31:28.64607744 +0000 UTC m=+0.158343156 container attach fc64b38683cee42c310be654ad33e9239262b22fc934ca1e8d747079f94485fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:31:28 compute-0 python3.9[111392]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 09:31:28 compute-0 sudo[111389]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:28 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14003890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:29.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 143 B/s rd, 0 op/s; 15 B/s, 0 objects/s recovering
Nov 24 09:31:29 compute-0 lvm[111593]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:31:29 compute-0 lvm[111593]: VG ceph_vg0 finished
Nov 24 09:31:29 compute-0 quirky_jepsen[111395]: {}
Nov 24 09:31:29 compute-0 sudo[111623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtdrzqrmutvqbyhfwhecqlrmxjncpiut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976689.0934849-765-25777448354015/AnsiballZ_file.py'
Nov 24 09:31:29 compute-0 sudo[111623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:29 compute-0 systemd[1]: libpod-fc64b38683cee42c310be654ad33e9239262b22fc934ca1e8d747079f94485fe.scope: Deactivated successfully.
Nov 24 09:31:29 compute-0 systemd[1]: libpod-fc64b38683cee42c310be654ad33e9239262b22fc934ca1e8d747079f94485fe.scope: Consumed 1.206s CPU time.
Nov 24 09:31:29 compute-0 podman[111350]: 2025-11-24 09:31:29.415089588 +0000 UTC m=+0.927355304 container died fc64b38683cee42c310be654ad33e9239262b22fc934ca1e8d747079f94485fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b92c41aae7ecc8ed5e555c9a52b7e17b3637361ee34311650fe358593ede4964-merged.mount: Deactivated successfully.
Nov 24 09:31:29 compute-0 podman[111350]: 2025-11-24 09:31:29.456825126 +0000 UTC m=+0.969090842 container remove fc64b38683cee42c310be654ad33e9239262b22fc934ca1e8d747079f94485fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:31:29 compute-0 systemd[1]: libpod-conmon-fc64b38683cee42c310be654ad33e9239262b22fc934ca1e8d747079f94485fe.scope: Deactivated successfully.
Nov 24 09:31:29 compute-0 sudo[111122]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:31:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:31:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:29 compute-0 python3.9[111625]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 09:31:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:29.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:29 compute-0 sudo[111638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:31:29 compute-0 sudo[111638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:29 compute-0 sudo[111638]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:29 compute-0 sudo[111623]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093129 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:31:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:30 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:30 compute-0 sudo[111813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obnjlrerrmpuzbvdhocwmzhjcghdnotg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976690.2343824-798-5425789841797/AnsiballZ_dnf.py'
Nov 24 09:31:30 compute-0 sudo[111813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:30 compute-0 ceph-mon[74331]: pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 143 B/s rd, 0 op/s; 15 B/s, 0 objects/s recovering
Nov 24 09:31:30 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:30 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:31:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:31:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:30 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:30 compute-0 python3.9[111815]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:31:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:31:30] "GET /metrics HTTP/1.1" 200 48243 "" "Prometheus/2.51.0"
Nov 24 09:31:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:31:30] "GET /metrics HTTP/1.1" 200 48243 "" "Prometheus/2.51.0"
Nov 24 09:31:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:31 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.002000042s ======
Nov 24 09:31:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:31.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000042s
Nov 24 09:31:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 250 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Nov 24 09:31:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000020s ======
Nov 24 09:31:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:31.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Nov 24 09:31:31 compute-0 sudo[111813]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:32 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab14003890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:32 compute-0 ceph-mon[74331]: pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 250 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Nov 24 09:31:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:32 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc0040b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:32 compute-0 sudo[111968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unvkivxtovlibdbqhhkjmcrckelpjacp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976692.5564291-822-181777095830724/AnsiballZ_file.py'
Nov 24 09:31:32 compute-0 sudo[111968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:33 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:33 compute-0 python3.9[111970]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:31:33 compute-0 sudo[111968]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:33.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Nov 24 09:31:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:33.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:33 compute-0 sudo[112121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmhdvcxrzkbyczuhzetaqcuihsawuvoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976693.3749623-846-168558998233545/AnsiballZ_stat.py'
Nov 24 09:31:33 compute-0 sudo[112121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:33 compute-0 python3.9[112123]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:31:33 compute-0 sudo[112121]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:34 compute-0 sudo[112199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhptpxzbbovnnvpjdcbtlflrfsujtjim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976693.3749623-846-168558998233545/AnsiballZ_file.py'
Nov 24 09:31:34 compute-0 sudo[112199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:34 compute-0 python3.9[112201]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:31:34 compute-0 sudo[112199]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:34 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:34 compute-0 ceph-mon[74331]: pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Nov 24 09:31:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:34 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab140045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:35 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab140045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:35 compute-0 sudo[112352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egclvijuildonwqvrmmvthddvirgxupf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976694.7680976-885-187169273288992/AnsiballZ_stat.py'
Nov 24 09:31:35 compute-0 sudo[112352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:35 compute-0 python3.9[112354]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:31:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:35.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:35 compute-0 sudo[112352]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 183 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Nov 24 09:31:35 compute-0 sudo[112431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pifiznjfaaoppgwtoxbbwjlheukaltei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976694.7680976-885-187169273288992/AnsiballZ_file.py'
Nov 24 09:31:35 compute-0 sudo[112431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:35.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:35 compute-0 python3.9[112433]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:31:35 compute-0 sudo[112431]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:36 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab140045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:36 compute-0 ceph-mon[74331]: pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 183 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Nov 24 09:31:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:36 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:36 compute-0 sudo[112584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdaajbsciusjvmyexxmknqsozufcglhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976696.467077-930-175280253609519/AnsiballZ_dnf.py'
Nov 24 09:31:36 compute-0 sudo[112584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:36 compute-0 ceph-mgr[74626]: [dashboard INFO request] [192.168.122.100:39234] [POST] [200] [0.001s] [4.0B] [4d14fd93-7b78-44b9-a785-f5033e214d65] /api/prometheus_receiver
Nov 24 09:31:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:37 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:37 compute-0 python3.9[112586]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:31:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:37.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Nov 24 09:31:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000020s ======
Nov 24 09:31:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:37.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Nov 24 09:31:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:38 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:38 compute-0 sudo[112584]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:38 compute-0 ceph-mon[74331]: pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Nov 24 09:31:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:38 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab140045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:39 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:39 compute-0 sudo[112690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:31:39 compute-0 sudo[112690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:39 compute-0 sudo[112690]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:39.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:31:39 compute-0 python3.9[112765]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:31:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:39.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:39 : epoch 6924254d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:31:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:40 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004130 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:40 compute-0 python3.9[112918]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 09:31:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:40 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:40 compute-0 ceph-mon[74331]: pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:31:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:31:40] "GET /metrics HTTP/1.1" 200 48243 "" "Prometheus/2.51.0"
Nov 24 09:31:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:31:40] "GET /metrics HTTP/1.1" 200 48243 "" "Prometheus/2.51.0"
Nov 24 09:31:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:41 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003f90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:41 compute-0 python3.9[113068]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:31:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:41.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:31:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:41.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:42 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:42 compute-0 sudo[113220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqmdkuumudxlbqtmtfvudofvrkfasiki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976701.967387-1053-184774815839969/AnsiballZ_systemd.py'
Nov 24 09:31:42 compute-0 sudo[113220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:42 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004150 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:42 compute-0 ceph-mon[74331]: pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:31:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:42 : epoch 6924254d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:31:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:42 : epoch 6924254d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:31:42 compute-0 python3.9[113222]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:31:42 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 09:31:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:43 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004150 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:43 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 09:31:43 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 09:31:43 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 09:31:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:43 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 09:31:43 compute-0 sudo[113220]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:43.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:31:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:43.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:44 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:44 compute-0 python3.9[113387]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 09:31:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:44 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:44 compute-0 ceph-mon[74331]: pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:31:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:45 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0000f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:31:45
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'volumes', '.mgr', 'backups']
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:31:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:45.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:31:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:31:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:45.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:31:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:45 : epoch 6924254d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:31:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc0041e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:46 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:46 compute-0 ceph-mon[74331]: pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:31:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:31:46.951Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:31:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:47 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:47.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:31:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:47.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:48 compute-0 sudo[113540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqoxpxgwbtiuoownfssluorinspyhsat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976707.8027775-1224-248862810487601/AnsiballZ_systemd.py'
Nov 24 09:31:48 compute-0 sudo[113540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:48 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0000f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:48 compute-0 python3.9[113542]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:31:48 compute-0 sudo[113540]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:48 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:48 compute-0 ceph-mon[74331]: pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:31:48 compute-0 sudo[113695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwahfkyicqisoccxhcssktpsefldxiqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976708.5674455-1224-141803762029189/AnsiballZ_systemd.py'
Nov 24 09:31:48 compute-0 sudo[113695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:49 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8003ff0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:49 compute-0 python3.9[113697]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:31:49 compute-0 sudo[113695]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:49.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:31:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000020s ======
Nov 24 09:31:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:49.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Nov 24 09:31:50 compute-0 sshd-session[105657]: Connection closed by 192.168.122.30 port 55434
Nov 24 09:31:50 compute-0 sshd-session[105630]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:31:50 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 24 09:31:50 compute-0 systemd[1]: session-39.scope: Consumed 1min 4.207s CPU time.
Nov 24 09:31:50 compute-0 systemd-logind[822]: Session 39 logged out. Waiting for processes to exit.
Nov 24 09:31:50 compute-0 systemd-logind[822]: Removed session 39.
Nov 24 09:31:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:50 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:50 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0000f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:50 compute-0 ceph-mon[74331]: pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:31:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:31:50] "GET /metrics HTTP/1.1" 200 48236 "" "Prometheus/2.51.0"
Nov 24 09:31:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:31:50] "GET /metrics HTTP/1.1" 200 48236 "" "Prometheus/2.51.0"
Nov 24 09:31:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:51 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004220 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:51.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:31:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:51.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093151 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:31:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:52 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8004010 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:52 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:52 compute-0 ceph-mon[74331]: pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:31:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:53 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0000f30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000021s ======
Nov 24 09:31:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:53.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Nov 24 09:31:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:31:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:53.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:54 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004240 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:54 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae8004030 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:54 compute-0 ceph-mon[74331]: pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:31:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:55 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:55.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:31:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:55.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:56 compute-0 ceph-mon[74331]: pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:31:56 compute-0 sshd-session[113731]: Accepted publickey for zuul from 192.168.122.30 port 39986 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:31:56 compute-0 systemd-logind[822]: New session 40 of user zuul.
Nov 24 09:31:56 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 24 09:31:56 compute-0 sshd-session[113731]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:31:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:56 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:56 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004260 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:56 compute-0 python3.9[113885]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:31:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:31:56.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:31:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:57 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004260 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:57.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:31:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:57.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:57 compute-0 sudo[114040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsbiuoihhtvclmlfmkndplqhfhhzhzmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976717.5589364-68-85224859480444/AnsiballZ_getent.py'
Nov 24 09:31:57 compute-0 sudo[114040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:58 compute-0 python3.9[114042]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 09:31:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:31:58 compute-0 sudo[114040]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:58 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:58 compute-0 ceph-mon[74331]: pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:31:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:58 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:58 compute-0 sudo[114194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdrlrtkrwphgatdgotanwjiqpktypfsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976718.6285856-104-221880465005473/AnsiballZ_setup.py'
Nov 24 09:31:58 compute-0 sudo[114194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:31:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:31:59 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004260 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:31:59 compute-0 python3.9[114196]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:31:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:31:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:31:59.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:31:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:31:59 compute-0 sudo[114204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:31:59 compute-0 sudo[114204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:31:59 compute-0 sudo[114204]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:59 compute-0 sudo[114194]: pam_unix(sudo:session): session closed for user root
Nov 24 09:31:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:31:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:31:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:31:59.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:31:59 compute-0 sudo[114304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sberpleuoxymlqyfhefyivntixnirqkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976718.6285856-104-221880465005473/AnsiballZ_dnf.py'
Nov 24 09:31:59 compute-0 sudo[114304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:00 compute-0 python3.9[114306]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 09:32:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:00 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004260 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:00 compute-0 ceph-mon[74331]: pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:32:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:32:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:00 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:32:00] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:32:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:32:00] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:32:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:01 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004260 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:01.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:32:01 compute-0 sudo[114304]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:01.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:02 compute-0 sudo[114460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpvcfxnuusyzasyanscplikfreiqqjlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976721.8455808-146-48274934261001/AnsiballZ_dnf.py'
Nov 24 09:32:02 compute-0 sudo[114460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:02 compute-0 python3.9[114462]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:32:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:02 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80040b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:02 compute-0 ceph-mon[74331]: pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:32:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:02 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80040b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:03 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:03.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:03.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:03 compute-0 sudo[114460]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:04 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:04 compute-0 sudo[114615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmqdjxdoouiorwxjpvvgvcvpkitvttdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976723.9220653-170-65774301882479/AnsiballZ_systemd.py'
Nov 24 09:32:04 compute-0 sudo[114615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:04 compute-0 ceph-mon[74331]: pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:04 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc0042a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:04 compute-0 python3.9[114617]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
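
Note: the ansible.builtin.systemd task above both enables and starts Open vSwitch. Outside of Ansible, the same effect comes from a single systemctl call (a sketch; run as root):

    systemctl enable --now openvswitch.service   # enable at boot and start immediately
    systemctl is-active openvswitch.service      # should report "active"
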
Nov 24 09:32:04 compute-0 sudo[114615]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:05 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80040b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:05.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:05.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:05 compute-0 python3.9[114771]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:32:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:06 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:06 compute-0 ceph-mon[74331]: pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:06 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:06 compute-0 sudo[114922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaqjsefgtwmnvspydrlgpaoaznuywqnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976726.2123094-224-180050363802702/AnsiballZ_sefcontext.py'
Nov 24 09:32:06 compute-0 sudo[114922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:06 compute-0 python3.9[114924]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
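
Note: community.general.sefcontext manages SELinux file-context rules; a manual equivalent of the task above would be roughly the following (a sketch — semanage ships in policycoreutils-python-utils, and selevel s0 is the default so it is omitted):

    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'   # register the context rule
    restorecon -Rv /var/lib/edpm-config   # apply it once the directory exists (created at 09:32:13 below)
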
Nov 24 09:32:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:32:06.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:32:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:07 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:07 compute-0 sudo[114922]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:32:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:07.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:07.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:08 compute-0 python3.9[115075]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:32:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:08 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80040b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:08 compute-0 ceph-mon[74331]: pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:32:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:08 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaf0003690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:09 compute-0 sudo[115232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awoymmzhztqvwylefklkpxhdiyasveao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976728.7487934-278-158163219021734/AnsiballZ_dnf.py'
Nov 24 09:32:09 compute-0 sudo[115232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:09 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:09 compute-0 python3.9[115234]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
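
Note: the dnf task above installs the EDPM base tooling, which the play then verifies with rpm -V at 09:32:11. An ad-hoc equivalent would look roughly like this (a sketch with an abbreviated package list, assuming an Ansible control environment on the host):

    ansible localhost -b -m ansible.builtin.dnf -a "name=driverctl,lvm2,crudini,jq,nftables state=present"
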
Nov 24 09:32:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:09.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:10 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab000045e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:10 compute-0 ceph-mon[74331]: pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:10 compute-0 sudo[115232]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:10 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004340 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:32:10] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:32:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:32:10] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:32:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:11 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faafc004340 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:11 compute-0 sudo[115389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppvksslyywpsgkgjsrcverqhuvypohvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976730.8807666-302-112839333710206/AnsiballZ_command.py'
Nov 24 09:32:11 compute-0 sudo[115389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:32:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:11.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:11 compute-0 python3.9[115391]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:32:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:11.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:12 compute-0 sudo[115389]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:12 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faae80040d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:12 compute-0 ceph-mon[74331]: pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:32:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:12 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab20001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:13 compute-0 sudo[115677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzhlaxvrpzooalyhhpvamrotwdpzlfvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976732.550862-326-43239870235506/AnsiballZ_file.py'
Nov 24 09:32:13 compute-0 sudo[115677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:13 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab20001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:13 compute-0 python3.9[115679]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 09:32:13 compute-0 sudo[115677]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:13.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:13.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:14 compute-0 python3.9[115830]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:32:14 compute-0 kernel: ganesha.nfsd[115235]: segfault at 50 ip 00007fabca8ca32e sp 00007fab8fffe210 error 4 in libntirpc.so.5.8[7fabca8af000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 24 09:32:14 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
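
Note: the kernel trap above shows ganesha.nfsd faulting inside libntirpc.so.5.8; systemd-coredump captures the core at 09:32:15 (process 96883 in the dump record below). A sketch for retrieving it for analysis, assuming gdb and the matching libntirpc debuginfo are installable:

    coredumpctl list ganesha.nfsd   # locate the stored core
    coredumpctl info 96883          # dump metadata plus the raw stack trace
    coredumpctl gdb 96883           # open the core in gdb for a symbolized backtrace
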
Nov 24 09:32:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[96879]: 24/11/2025 09:32:14 : epoch 6924254d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fab20001320 fd 49 proxy ignored for local
Nov 24 09:32:14 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 24 09:32:14 compute-0 systemd[1]: Started Process Core Dump (PID 115864/UID 0).
Nov 24 09:32:14 compute-0 ceph-mon[74331]: pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:14 compute-0 sudo[115985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyfaxrjlbyeafbowusgclruyclhrzfyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976734.4079852-374-136486938435603/AnsiballZ_dnf.py'
Nov 24 09:32:14 compute-0 sudo[115985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:14 compute-0 python3.9[115987]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:32:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:15.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:32:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:32:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:32:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:32:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:32:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:32:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:32:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:15.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:15 compute-0 systemd-coredump[115882]: Process 96883 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 69:
                                                    #0  0x00007fabca8ca32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:32:15 compute-0 systemd[1]: systemd-coredump@0-115864-0.service: Deactivated successfully.
Nov 24 09:32:15 compute-0 systemd[1]: systemd-coredump@0-115864-0.service: Consumed 1.342s CPU time.
Nov 24 09:32:15 compute-0 podman[115994]: 2025-11-24 09:32:15.933429404 +0000 UTC m=+0.036953010 container died 3adc7e4dbfb76acd70b92bdc8783d49c26735889ac1576ee9a74ae48f52acf62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Nov 24 09:32:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3cdb7bdfdae5dcbcf1fe0536a4e1ce178bf9372983ea15fc13bc1f0a1a65f89-merged.mount: Deactivated successfully.
Nov 24 09:32:15 compute-0 podman[115994]: 2025-11-24 09:32:15.999243358 +0000 UTC m=+0.102766944 container remove 3adc7e4dbfb76acd70b92bdc8783d49c26735889ac1576ee9a74ae48f52acf62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:32:16 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:32:16 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Failed with result 'exit-code'.
Nov 24 09:32:16 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.883s CPU time.
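
Note: exit status 139 is 128 + 11, i.e. the main process was killed by SIGSEGV, consistent with the libntirpc segfault at 09:32:14; systemd reschedules the unit at 09:32:26 (restart counter 1). To inspect the unit's restart policy and failure state (a sketch):

    systemctl show ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service \
        -p Restart -p RestartUSec -p NRestarts -p Result
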
Nov 24 09:32:16 compute-0 sudo[115985]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:16 compute-0 ceph-mon[74331]: pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:32:16.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:32:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:32:16.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:32:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:32:16.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
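
Note: Alertmanager cannot deliver the ceph-dashboard webhook to compute-1/compute-2 on port 8443 (the dials to 192.168.122.101/.102 time out). A direct reachability probe from this host, using the URLs from the log (curl assumed present; the empty JSON body only exercises the endpoint):

    curl -sv --max-time 5 -d '{}' http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver || echo "compute-1 receiver unreachable"
    curl -sv --max-time 5 -d '{}' http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver || echo "compute-2 receiver unreachable"
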
Nov 24 09:32:16 compute-0 sudo[116187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gumuoglmpjevmhmemjhzvjcwrqapxaue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976736.6912107-401-194485465286371/AnsiballZ_dnf.py'
Nov 24 09:32:16 compute-0 sudo[116187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:17 compute-0 python3.9[116189]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:32:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:32:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:17.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:17.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:18 compute-0 sudo[116187]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:18 compute-0 ceph-mon[74331]: pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:32:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:19.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:19 compute-0 sudo[116343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udgqazdelsushavdqfdsniboqeopqqtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976739.1084735-437-122826287729413/AnsiballZ_stat.py'
Nov 24 09:32:19 compute-0 sudo[116343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:19 compute-0 sudo[116346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:32:19 compute-0 sudo[116346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:19 compute-0 sudo[116346]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:19 compute-0 python3.9[116345]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:32:19 compute-0 sudo[116343]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:19.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:20 compute-0 sudo[116523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwylpeakilhtzwlkallxejrwcjywykmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976739.8799584-461-3630167790026/AnsiballZ_slurp.py'
Nov 24 09:32:20 compute-0 sudo[116523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093220 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
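
Note: haproxy drops nfs.cephfs.2 from the backend after the ganesha crash (the matching UP transition was logged at 09:31:51). If the stats socket is enabled in this haproxy's config, per-server state can be read directly; the socket path below is an assumption, not taken from this log:

    echo "show servers state" | socat stdio /var/lib/haproxy/stats
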
Nov 24 09:32:20 compute-0 python3.9[116525]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 24 09:32:20 compute-0 sudo[116523]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:20 compute-0 ceph-mon[74331]: pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:32:20] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:32:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:32:20] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:32:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:21.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:21.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:21 compute-0 sshd-session[113735]: Connection closed by 192.168.122.30 port 39986
Nov 24 09:32:21 compute-0 sshd-session[113731]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:32:21 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 24 09:32:21 compute-0 systemd[1]: session-40.scope: Consumed 18.029s CPU time.
Nov 24 09:32:21 compute-0 systemd-logind[822]: Session 40 logged out. Waiting for processes to exit.
Nov 24 09:32:21 compute-0 systemd-logind[822]: Removed session 40.
Nov 24 09:32:22 compute-0 ceph-mon[74331]: pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:23.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:23.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:24 compute-0 ceph-mon[74331]: pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:25.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:25.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:26 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Scheduled restart job, restart counter is at 1.
Nov 24 09:32:26 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:32:26 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.883s CPU time.
Nov 24 09:32:26 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:32:26 compute-0 podman[116603]: 2025-11-24 09:32:26.665389134 +0000 UTC m=+0.042327999 container create 91116f2070f86f3d214da86a09fe74e8d242d995ea22ce4da683ead47b106935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:32:26 compute-0 ceph-mon[74331]: pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffc80b4a79f47a05f054ed559a7695a704214d606723e81ad116f7852185cbae/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffc80b4a79f47a05f054ed559a7695a704214d606723e81ad116f7852185cbae/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffc80b4a79f47a05f054ed559a7695a704214d606723e81ad116f7852185cbae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffc80b4a79f47a05f054ed559a7695a704214d606723e81ad116f7852185cbae/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ssprex-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:26 compute-0 podman[116603]: 2025-11-24 09:32:26.730366717 +0000 UTC m=+0.107305612 container init 91116f2070f86f3d214da86a09fe74e8d242d995ea22ce4da683ead47b106935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:32:26 compute-0 podman[116603]: 2025-11-24 09:32:26.735668725 +0000 UTC m=+0.112607600 container start 91116f2070f86f3d214da86a09fe74e8d242d995ea22ce4da683ead47b106935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:32:26 compute-0 bash[116603]: 91116f2070f86f3d214da86a09fe74e8d242d995ea22ce4da683ead47b106935
Nov 24 09:32:26 compute-0 podman[116603]: 2025-11-24 09:32:26.644946212 +0000 UTC m=+0.021885107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:32:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:26 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:32:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:26 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:32:26 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:32:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:26 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:32:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:26 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:32:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:26 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:32:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:26 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:32:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:26 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:32:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:26 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:32:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:32:26.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:32:26 compute-0 sshd-session[116661]: Accepted publickey for zuul from 192.168.122.30 port 46636 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:32:27 compute-0 systemd-logind[822]: New session 41 of user zuul.
Nov 24 09:32:27 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 24 09:32:27 compute-0 sshd-session[116661]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:32:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:27.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:27.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:28 compute-0 python3.9[116815]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:32:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:28 compute-0 ceph-mon[74331]: pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:32:29 compute-0 python3.9[116970]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:32:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:29.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:29.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:29 compute-0 sudo[117062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:32:29 compute-0 sudo[117062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:29 compute-0 sudo[117062]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:29 compute-0 sudo[117116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:32:29 compute-0 sudo[117116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:30 compute-0 python3.9[117249]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:32:30 compute-0 podman[117289]: 2025-11-24 09:32:30.5517825 +0000 UTC m=+0.057670499 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:32:30 compute-0 podman[117289]: 2025-11-24 09:32:30.642918663 +0000 UTC m=+0.148806642 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:32:30 compute-0 ceph-mon[74331]: pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:32:30 compute-0 sshd-session[116664]: Connection closed by 192.168.122.30 port 46636
Nov 24 09:32:30 compute-0 sshd-session[116661]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:32:30 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 24 09:32:30 compute-0 systemd[1]: session-41.scope: Consumed 2.473s CPU time.
Nov 24 09:32:30 compute-0 systemd-logind[822]: Session 41 logged out. Waiting for processes to exit.
Nov 24 09:32:30 compute-0 systemd-logind[822]: Removed session 41.
Nov 24 09:32:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:32:30] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:32:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:32:30] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:32:31 compute-0 podman[117451]: 2025-11-24 09:32:31.223884451 +0000 UTC m=+0.066490341 container exec c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:32:31 compute-0 podman[117451]: 2025-11-24 09:32:31.262572151 +0000 UTC m=+0.105177951 container exec_died c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:32:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:32:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:31.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:31 compute-0 podman[117525]: 2025-11-24 09:32:31.532077116 +0000 UTC m=+0.064731738 container exec 91116f2070f86f3d214da86a09fe74e8d242d995ea22ce4da683ead47b106935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 09:32:31 compute-0 podman[117525]: 2025-11-24 09:32:31.565977962 +0000 UTC m=+0.098632584 container exec_died 91116f2070f86f3d214da86a09fe74e8d242d995ea22ce4da683ead47b106935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 09:32:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:31.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:31 compute-0 podman[117589]: 2025-11-24 09:32:31.775890142 +0000 UTC m=+0.052999036 container exec 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:32:31 compute-0 podman[117589]: 2025-11-24 09:32:31.809577293 +0000 UTC m=+0.086686157 container exec_died 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:32:31 compute-0 ceph-mon[74331]: pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:32:32 compute-0 podman[117656]: 2025-11-24 09:32:32.031041431 +0000 UTC m=+0.055424484 container exec da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, distribution-scope=public, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, architecture=x86_64, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 24 09:32:32 compute-0 podman[117656]: 2025-11-24 09:32:32.044524696 +0000 UTC m=+0.068907739 container exec_died da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.buildah.version=1.28.2, io.openshift.expose-services=, com.redhat.component=keepalived-container, name=keepalived, distribution-scope=public)
Nov 24 09:32:32 compute-0 podman[117722]: 2025-11-24 09:32:32.265650776 +0000 UTC m=+0.052636898 container exec 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:32:32 compute-0 podman[117722]: 2025-11-24 09:32:32.296603161 +0000 UTC m=+0.083589233 container exec_died 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:32:32 compute-0 podman[117798]: 2025-11-24 09:32:32.534368652 +0000 UTC m=+0.052302280 container exec 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:32:32 compute-0 podman[117798]: 2025-11-24 09:32:32.709787182 +0000 UTC m=+0.227720790 container exec_died 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:32:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:32 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:32:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:32 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:32:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:33 compute-0 podman[117912]: 2025-11-24 09:32:33.203186973 +0000 UTC m=+0.090959920 container exec 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:32:33 compute-0 podman[117912]: 2025-11-24 09:32:33.260621585 +0000 UTC m=+0.148394532 container exec_died 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:32:33 compute-0 sudo[117116]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:32:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:32:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:33.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:32:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:33 compute-0 sudo[117958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:32:33 compute-0 sudo[117958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:33 compute-0 sudo[117958]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:33 compute-0 sudo[117983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:32:33 compute-0 sudo[117983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:33.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:34 compute-0 sudo[117983]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:32:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:32:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:34 compute-0 sudo[118041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:32:34 compute-0 sudo[118041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:34 compute-0 sudo[118041]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:34 compute-0 sudo[118066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:32:34 compute-0 sudo[118066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:34 compute-0 ceph-mon[74331]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:32:34 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:34 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:34 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:32:34 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:32:34 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:34 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:34 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:32:34 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:32:34 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:32:34 compute-0 podman[118131]: 2025-11-24 09:32:34.756348643 +0000 UTC m=+0.045895746 container create 816c1dd62badcec4fca2e487fb357dd2f27d3a720be526caf184e5e7db1ace8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 09:32:34 compute-0 systemd[1]: Started libpod-conmon-816c1dd62badcec4fca2e487fb357dd2f27d3a720be526caf184e5e7db1ace8e.scope.
Nov 24 09:32:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:32:34 compute-0 podman[118131]: 2025-11-24 09:32:34.736970336 +0000 UTC m=+0.026517469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:32:34 compute-0 podman[118131]: 2025-11-24 09:32:34.838073088 +0000 UTC m=+0.127620201 container init 816c1dd62badcec4fca2e487fb357dd2f27d3a720be526caf184e5e7db1ace8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mendeleev, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 09:32:34 compute-0 podman[118131]: 2025-11-24 09:32:34.846076711 +0000 UTC m=+0.135623824 container start 816c1dd62badcec4fca2e487fb357dd2f27d3a720be526caf184e5e7db1ace8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 09:32:34 compute-0 podman[118131]: 2025-11-24 09:32:34.84892613 +0000 UTC m=+0.138473263 container attach 816c1dd62badcec4fca2e487fb357dd2f27d3a720be526caf184e5e7db1ace8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 24 09:32:34 compute-0 affectionate_mendeleev[118147]: 167 167
Nov 24 09:32:34 compute-0 systemd[1]: libpod-816c1dd62badcec4fca2e487fb357dd2f27d3a720be526caf184e5e7db1ace8e.scope: Deactivated successfully.
Nov 24 09:32:34 compute-0 podman[118131]: 2025-11-24 09:32:34.852134847 +0000 UTC m=+0.141681970 container died 816c1dd62badcec4fca2e487fb357dd2f27d3a720be526caf184e5e7db1ace8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mendeleev, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:32:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0310abd16137b03a7742f115fa440faafc83c02024792f6df3974224002d1e44-merged.mount: Deactivated successfully.
Nov 24 09:32:34 compute-0 podman[118131]: 2025-11-24 09:32:34.911804273 +0000 UTC m=+0.201351376 container remove 816c1dd62badcec4fca2e487fb357dd2f27d3a720be526caf184e5e7db1ace8e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mendeleev, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 24 09:32:34 compute-0 systemd[1]: libpod-conmon-816c1dd62badcec4fca2e487fb357dd2f27d3a720be526caf184e5e7db1ace8e.scope: Deactivated successfully.
Nov 24 09:32:35 compute-0 podman[118172]: 2025-11-24 09:32:35.111011575 +0000 UTC m=+0.066360008 container create 1a08323a010f17861e4c858e32cec4c55d8afa0ae3d134dd8d42bc2d1e0ea78b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_sanderson, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:32:35 compute-0 systemd[1]: Started libpod-conmon-1a08323a010f17861e4c858e32cec4c55d8afa0ae3d134dd8d42bc2d1e0ea78b.scope.
Nov 24 09:32:35 compute-0 podman[118172]: 2025-11-24 09:32:35.079905667 +0000 UTC m=+0.035254230 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:32:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bc302658b15bf2b76d729909bd98fa4d910c155902c6658a277f58be667b962/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bc302658b15bf2b76d729909bd98fa4d910c155902c6658a277f58be667b962/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bc302658b15bf2b76d729909bd98fa4d910c155902c6658a277f58be667b962/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bc302658b15bf2b76d729909bd98fa4d910c155902c6658a277f58be667b962/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bc302658b15bf2b76d729909bd98fa4d910c155902c6658a277f58be667b962/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:35 compute-0 podman[118172]: 2025-11-24 09:32:35.226506164 +0000 UTC m=+0.181854627 container init 1a08323a010f17861e4c858e32cec4c55d8afa0ae3d134dd8d42bc2d1e0ea78b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 24 09:32:35 compute-0 podman[118172]: 2025-11-24 09:32:35.234570568 +0000 UTC m=+0.189919001 container start 1a08323a010f17861e4c858e32cec4c55d8afa0ae3d134dd8d42bc2d1e0ea78b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:32:35 compute-0 podman[118172]: 2025-11-24 09:32:35.23837303 +0000 UTC m=+0.193721483 container attach 1a08323a010f17861e4c858e32cec4c55d8afa0ae3d134dd8d42bc2d1e0ea78b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:32:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:32:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:35.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:35 compute-0 admiring_sanderson[118189]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:32:35 compute-0 admiring_sanderson[118189]: --> All data devices are unavailable
Nov 24 09:32:35 compute-0 systemd[1]: libpod-1a08323a010f17861e4c858e32cec4c55d8afa0ae3d134dd8d42bc2d1e0ea78b.scope: Deactivated successfully.
Nov 24 09:32:35 compute-0 podman[118172]: 2025-11-24 09:32:35.638769133 +0000 UTC m=+0.594117576 container died 1a08323a010f17861e4c858e32cec4c55d8afa0ae3d134dd8d42bc2d1e0ea78b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_sanderson, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:32:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bc302658b15bf2b76d729909bd98fa4d910c155902c6658a277f58be667b962-merged.mount: Deactivated successfully.
Nov 24 09:32:35 compute-0 podman[118172]: 2025-11-24 09:32:35.695324034 +0000 UTC m=+0.650672507 container remove 1a08323a010f17861e4c858e32cec4c55d8afa0ae3d134dd8d42bc2d1e0ea78b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 09:32:35 compute-0 systemd[1]: libpod-conmon-1a08323a010f17861e4c858e32cec4c55d8afa0ae3d134dd8d42bc2d1e0ea78b.scope: Deactivated successfully.
Nov 24 09:32:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:35.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:35 compute-0 sudo[118066]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:35 compute-0 sudo[118218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:32:35 compute-0 sudo[118218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:35 compute-0 sudo[118218]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:35 compute-0 sshd-session[118216]: Accepted publickey for zuul from 192.168.122.30 port 47824 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:32:35 compute-0 systemd-logind[822]: New session 42 of user zuul.
Nov 24 09:32:35 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 24 09:32:35 compute-0 sshd-session[118216]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:32:35 compute-0 sudo[118243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:32:35 compute-0 sudo[118243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:36 compute-0 podman[118362]: 2025-11-24 09:32:36.35891137 +0000 UTC m=+0.043575160 container create cc4ebb9df105a6bcf9441bc27d93a0bb505b85bfe522b6226349df954ed1c9b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gates, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Nov 24 09:32:36 compute-0 ceph-mon[74331]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:32:36 compute-0 systemd[1]: Started libpod-conmon-cc4ebb9df105a6bcf9441bc27d93a0bb505b85bfe522b6226349df954ed1c9b6.scope.
Nov 24 09:32:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:32:36 compute-0 podman[118362]: 2025-11-24 09:32:36.341012889 +0000 UTC m=+0.025676699 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:32:36 compute-0 podman[118362]: 2025-11-24 09:32:36.446978749 +0000 UTC m=+0.131642569 container init cc4ebb9df105a6bcf9441bc27d93a0bb505b85bfe522b6226349df954ed1c9b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gates, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 09:32:36 compute-0 podman[118362]: 2025-11-24 09:32:36.456917528 +0000 UTC m=+0.141581318 container start cc4ebb9df105a6bcf9441bc27d93a0bb505b85bfe522b6226349df954ed1c9b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:32:36 compute-0 podman[118362]: 2025-11-24 09:32:36.461243252 +0000 UTC m=+0.145907042 container attach cc4ebb9df105a6bcf9441bc27d93a0bb505b85bfe522b6226349df954ed1c9b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gates, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:32:36 compute-0 epic_gates[118416]: 167 167
Nov 24 09:32:36 compute-0 systemd[1]: libpod-cc4ebb9df105a6bcf9441bc27d93a0bb505b85bfe522b6226349df954ed1c9b6.scope: Deactivated successfully.
Nov 24 09:32:36 compute-0 podman[118362]: 2025-11-24 09:32:36.465193107 +0000 UTC m=+0.149856927 container died cc4ebb9df105a6bcf9441bc27d93a0bb505b85bfe522b6226349df954ed1c9b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gates, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 09:32:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-323dcb35a1c8ef496b41327860530c9803f3e1a95bfbe0003cb1822d0964db2e-merged.mount: Deactivated successfully.
Nov 24 09:32:36 compute-0 podman[118362]: 2025-11-24 09:32:36.513392856 +0000 UTC m=+0.198056646 container remove cc4ebb9df105a6bcf9441bc27d93a0bb505b85bfe522b6226349df954ed1c9b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_gates, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 24 09:32:36 compute-0 systemd[1]: libpod-conmon-cc4ebb9df105a6bcf9441bc27d93a0bb505b85bfe522b6226349df954ed1c9b6.scope: Deactivated successfully.
Nov 24 09:32:36 compute-0 podman[118500]: 2025-11-24 09:32:36.696887642 +0000 UTC m=+0.054849532 container create 55f31574096a62cf90a69d474fc05a509802a97fce9d219c546f750ed28aba51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 24 09:32:36 compute-0 systemd[1]: Started libpod-conmon-55f31574096a62cf90a69d474fc05a509802a97fce9d219c546f750ed28aba51.scope.
Nov 24 09:32:36 compute-0 podman[118500]: 2025-11-24 09:32:36.67398181 +0000 UTC m=+0.031943730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:32:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9beb5bdcd34c92a0e16492efc7280e05633a7790513706a84f35400158ddfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9beb5bdcd34c92a0e16492efc7280e05633a7790513706a84f35400158ddfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9beb5bdcd34c92a0e16492efc7280e05633a7790513706a84f35400158ddfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9beb5bdcd34c92a0e16492efc7280e05633a7790513706a84f35400158ddfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:36 compute-0 podman[118500]: 2025-11-24 09:32:36.784778836 +0000 UTC m=+0.142740746 container init 55f31574096a62cf90a69d474fc05a509802a97fce9d219c546f750ed28aba51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_rhodes, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:32:36 compute-0 podman[118500]: 2025-11-24 09:32:36.792030721 +0000 UTC m=+0.149992611 container start 55f31574096a62cf90a69d474fc05a509802a97fce9d219c546f750ed28aba51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:32:36 compute-0 podman[118500]: 2025-11-24 09:32:36.795250848 +0000 UTC m=+0.153212758 container attach 55f31574096a62cf90a69d474fc05a509802a97fce9d219c546f750ed28aba51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 09:32:36 compute-0 python3.9[118494]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:32:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:32:36.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:32:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:32:36.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]: {
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:     "0": [
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:         {
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             "devices": [
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "/dev/loop3"
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             ],
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             "lv_name": "ceph_lv0",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             "lv_size": "21470642176",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             "name": "ceph_lv0",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             "tags": {
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.cluster_name": "ceph",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.crush_device_class": "",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.encrypted": "0",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.osd_id": "0",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.type": "block",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.vdo": "0",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:                 "ceph.with_tpm": "0"
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             },
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             "type": "block",
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:             "vg_name": "ceph_vg0"
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:         }
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]:     ]
Nov 24 09:32:37 compute-0 priceless_rhodes[118516]: }
Nov 24 09:32:37 compute-0 systemd[1]: libpod-55f31574096a62cf90a69d474fc05a509802a97fce9d219c546f750ed28aba51.scope: Deactivated successfully.
Nov 24 09:32:37 compute-0 podman[118500]: 2025-11-24 09:32:37.185739303 +0000 UTC m=+0.543701213 container died 55f31574096a62cf90a69d474fc05a509802a97fce9d219c546f750ed28aba51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_rhodes, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:32:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe9beb5bdcd34c92a0e16492efc7280e05633a7790513706a84f35400158ddfc-merged.mount: Deactivated successfully.
Nov 24 09:32:37 compute-0 podman[118500]: 2025-11-24 09:32:37.232731194 +0000 UTC m=+0.590693084 container remove 55f31574096a62cf90a69d474fc05a509802a97fce9d219c546f750ed28aba51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:32:37 compute-0 systemd[1]: libpod-conmon-55f31574096a62cf90a69d474fc05a509802a97fce9d219c546f750ed28aba51.scope: Deactivated successfully.
Nov 24 09:32:37 compute-0 sudo[118243]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:32:37 compute-0 sudo[118571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:32:37 compute-0 sudo[118571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:37 compute-0 sudo[118571]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:37.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:37 compute-0 sudo[118628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:32:37 compute-0 sudo[118628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:37.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:37 compute-0 podman[118785]: 2025-11-24 09:32:37.794179982 +0000 UTC m=+0.035649748 container create 53914d35b8922f36db1020733599fecbafa7e3a151a538dcf8cd35520a2152ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 09:32:37 compute-0 systemd[1]: Started libpod-conmon-53914d35b8922f36db1020733599fecbafa7e3a151a538dcf8cd35520a2152ba.scope.
Nov 24 09:32:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:32:37 compute-0 podman[118785]: 2025-11-24 09:32:37.873737546 +0000 UTC m=+0.115207332 container init 53914d35b8922f36db1020733599fecbafa7e3a151a538dcf8cd35520a2152ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:32:37 compute-0 podman[118785]: 2025-11-24 09:32:37.778898574 +0000 UTC m=+0.020368350 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:32:37 compute-0 podman[118785]: 2025-11-24 09:32:37.881388041 +0000 UTC m=+0.122857797 container start 53914d35b8922f36db1020733599fecbafa7e3a151a538dcf8cd35520a2152ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:32:37 compute-0 podman[118785]: 2025-11-24 09:32:37.884970547 +0000 UTC m=+0.126440323 container attach 53914d35b8922f36db1020733599fecbafa7e3a151a538dcf8cd35520a2152ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:32:37 compute-0 systemd[1]: libpod-53914d35b8922f36db1020733599fecbafa7e3a151a538dcf8cd35520a2152ba.scope: Deactivated successfully.
Nov 24 09:32:37 compute-0 suspicious_booth[118801]: 167 167
Nov 24 09:32:37 compute-0 conmon[118801]: conmon 53914d35b8922f36db10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53914d35b8922f36db1020733599fecbafa7e3a151a538dcf8cd35520a2152ba.scope/container/memory.events
Nov 24 09:32:37 compute-0 podman[118785]: 2025-11-24 09:32:37.889508876 +0000 UTC m=+0.130978632 container died 53914d35b8922f36db1020733599fecbafa7e3a151a538dcf8cd35520a2152ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:32:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb99c3f93350234fed5f889f02b04c2d85824ff3173487ec68bd1e5d3a444c0f-merged.mount: Deactivated successfully.
Nov 24 09:32:37 compute-0 podman[118785]: 2025-11-24 09:32:37.924784325 +0000 UTC m=+0.166254081 container remove 53914d35b8922f36db1020733599fecbafa7e3a151a538dcf8cd35520a2152ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:32:37 compute-0 systemd[1]: libpod-conmon-53914d35b8922f36db1020733599fecbafa7e3a151a538dcf8cd35520a2152ba.scope: Deactivated successfully.
Nov 24 09:32:38 compute-0 python3.9[118784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:32:38 compute-0 podman[118829]: 2025-11-24 09:32:38.091121516 +0000 UTC m=+0.047862212 container create 78de2e5f8358f1d9a6f00c156beefc67c6968ad7e18c4e7544dc8465b2074887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_mahavira, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 24 09:32:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:38 compute-0 systemd[1]: Started libpod-conmon-78de2e5f8358f1d9a6f00c156beefc67c6968ad7e18c4e7544dc8465b2074887.scope.
Nov 24 09:32:38 compute-0 podman[118829]: 2025-11-24 09:32:38.068647766 +0000 UTC m=+0.025388482 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:32:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aad696844b77d23e97c690796e090ae8bdec8709a8c779147c5d4d93ea05f17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aad696844b77d23e97c690796e090ae8bdec8709a8c779147c5d4d93ea05f17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aad696844b77d23e97c690796e090ae8bdec8709a8c779147c5d4d93ea05f17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aad696844b77d23e97c690796e090ae8bdec8709a8c779147c5d4d93ea05f17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:32:38 compute-0 podman[118829]: 2025-11-24 09:32:38.186686806 +0000 UTC m=+0.143427512 container init 78de2e5f8358f1d9a6f00c156beefc67c6968ad7e18c4e7544dc8465b2074887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_mahavira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:32:38 compute-0 podman[118829]: 2025-11-24 09:32:38.194694428 +0000 UTC m=+0.151435104 container start 78de2e5f8358f1d9a6f00c156beefc67c6968ad7e18c4e7544dc8465b2074887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_mahavira, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 09:32:38 compute-0 podman[118829]: 2025-11-24 09:32:38.197700511 +0000 UTC m=+0.154441187 container attach 78de2e5f8358f1d9a6f00c156beefc67c6968ad7e18c4e7544dc8465b2074887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 09:32:38 compute-0 ceph-mon[74331]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:32:38 compute-0 sudo[119056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjzcailxxflptzimfouvauahpporlcsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976758.4408762-80-267832533307089/AnsiballZ_setup.py'
Nov 24 09:32:38 compute-0 sudo[119056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:38 compute-0 lvm[119073]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:32:38 compute-0 lvm[119073]: VG ceph_vg0 finished
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:32:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:32:38 compute-0 brave_mahavira[118847]: {}
Nov 24 09:32:38 compute-0 systemd[1]: libpod-78de2e5f8358f1d9a6f00c156beefc67c6968ad7e18c4e7544dc8465b2074887.scope: Deactivated successfully.
Nov 24 09:32:38 compute-0 podman[118829]: 2025-11-24 09:32:38.934546599 +0000 UTC m=+0.891287275 container died 78de2e5f8358f1d9a6f00c156beefc67c6968ad7e18c4e7544dc8465b2074887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_mahavira, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:32:38 compute-0 systemd[1]: libpod-78de2e5f8358f1d9a6f00c156beefc67c6968ad7e18c4e7544dc8465b2074887.scope: Consumed 1.160s CPU time.
Nov 24 09:32:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aad696844b77d23e97c690796e090ae8bdec8709a8c779147c5d4d93ea05f17-merged.mount: Deactivated successfully.
Nov 24 09:32:38 compute-0 podman[118829]: 2025-11-24 09:32:38.981844106 +0000 UTC m=+0.938584782 container remove 78de2e5f8358f1d9a6f00c156beefc67c6968ad7e18c4e7544dc8465b2074887 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_mahavira, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:32:38 compute-0 systemd[1]: libpod-conmon-78de2e5f8358f1d9a6f00c156beefc67c6968ad7e18c4e7544dc8465b2074887.scope: Deactivated successfully.
Nov 24 09:32:39 compute-0 python3.9[119061]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:32:39 compute-0 sudo[118628]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:32:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:32:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:39 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51d0000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:39 compute-0 sudo[119104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:32:39 compute-0 sudo[119104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:39 compute-0 sudo[119104]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:39 compute-0 sudo[119056]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:32:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:39.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:39 compute-0 sudo[119158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:32:39 compute-0 sudo[119158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:39 compute-0 sudo[119158]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:39 compute-0 sudo[119233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veyevlskabplbifaynliykyayvktdwak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976758.4408762-80-267832533307089/AnsiballZ_dnf.py'
Nov 24 09:32:39 compute-0 sudo[119233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:39.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:39 compute-0 python3.9[119235]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:32:40 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:40 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:32:40 compute-0 ceph-mon[74331]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:32:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:40 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:40 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:32:40] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:32:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:32:40] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:32:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:41 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:41 compute-0 sudo[119233]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:32:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:41.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:41 compute-0 sudo[119388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzqghujzmjptuemstuptyldddaitwgkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976761.3551447-116-278531519658122/AnsiballZ_setup.py'
Nov 24 09:32:41 compute-0 sudo[119388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:41.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:41 compute-0 python3.9[119390]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:32:42 compute-0 sudo[119388]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093242 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:32:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:42 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:42 compute-0 ceph-mon[74331]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:32:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:42 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:43 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:43 compute-0 sudo[119585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhshzsuzkeoedzwrxxrqxaxuhhwdqjah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976762.799205-149-211840481094786/AnsiballZ_file.py'
Nov 24 09:32:43 compute-0 sudo[119585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:32:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:43.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:43 compute-0 python3.9[119587]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:32:43 compute-0 sudo[119585]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:43.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:44 compute-0 sudo[119738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlefznshwprycihhjpsygeyaqahqdrgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976763.8307903-173-159308009854889/AnsiballZ_command.py'
Nov 24 09:32:44 compute-0 sudo[119738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:44 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:44 compute-0 ceph-mon[74331]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:32:44 compute-0 python3.9[119740]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:32:44 compute-0 sudo[119738]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:44 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:45 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:32:45
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.nfs', 'default.rgw.meta', 'default.rgw.log', 'backups', 'vms', 'volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:32:45 compute-0 sudo[119904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbvdcbdxbwixovyxsbbxujqafuupyxmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976764.928651-197-226391361669188/AnsiballZ_stat.py'
Nov 24 09:32:45 compute-0 sudo[119904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:45.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:32:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:32:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:32:45 compute-0 python3.9[119906]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:32:45 compute-0 sudo[119904]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:45.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:45 compute-0 sudo[119982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajhzyebdezskqmueavvkitdxdfjmuqex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976764.928651-197-226391361669188/AnsiballZ_file.py'
Nov 24 09:32:45 compute-0 sudo[119982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:45 compute-0 python3.9[119984]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:32:45 compute-0 sudo[119982]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:46 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:46 compute-0 ceph-mon[74331]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:32:46 compute-0 sudo[120135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsfulbjoepnwalemcdzvkbufkgleymcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976766.3525887-233-106030796354210/AnsiballZ_stat.py'
Nov 24 09:32:46 compute-0 sudo[120135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:46 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:46 compute-0 python3.9[120137]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:32:46 compute-0 sudo[120135]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:32:46.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:32:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:32:46.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:32:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:32:46.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:32:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:47 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:47 compute-0 sudo[120213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwlwrqikyqrpywhxosipswojfsfchyvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976766.3525887-233-106030796354210/AnsiballZ_file.py'
Nov 24 09:32:47 compute-0 sudo[120213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:47 compute-0 python3.9[120215]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:32:47 compute-0 sudo[120213]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:32:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:47.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:47.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:48 compute-0 sudo[120367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbokutrpgvcntiqfmozehpwosqjkdzcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976767.7756977-272-87459270801121/AnsiballZ_ini_file.py'
Nov 24 09:32:48 compute-0 sudo[120367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:48 compute-0 python3.9[120369]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:32:48 compute-0 sudo[120367]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:48 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:48 compute-0 ceph-mon[74331]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:32:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:48 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:48 compute-0 sudo[120519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dztalywjuibuedgjrcmzgwgqmywzntdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976768.5086133-272-176188149985331/AnsiballZ_ini_file.py'
Nov 24 09:32:48 compute-0 sudo[120519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:48 compute-0 python3.9[120521]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:32:48 compute-0 sudo[120519]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:49 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093249 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:32:49 compute-0 sudo[120672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlwgbzbrtekunsjhphabyssegngxotsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976769.0420089-272-108273561676634/AnsiballZ_ini_file.py'
Nov 24 09:32:49 compute-0 sudo[120672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:32:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:49.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:49 compute-0 python3.9[120674]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:32:49 compute-0 sudo[120672]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:49.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:49 compute-0 sudo[120824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjzgrcdvvgabmonwgsoubjlsfyglshzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976769.594825-272-213929105152651/AnsiballZ_ini_file.py'
Nov 24 09:32:49 compute-0 sudo[120824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:50 compute-0 python3.9[120826]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:32:50 compute-0 sudo[120824]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:50 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:50 compute-0 ceph-mon[74331]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:32:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:50 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:32:50] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Nov 24 09:32:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:32:50] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Nov 24 09:32:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:51 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:51 compute-0 sudo[120978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptzwrzotnpzqjhjwhiyglocwtpaceryv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976770.9473772-365-251620199206296/AnsiballZ_dnf.py'
Nov 24 09:32:51 compute-0 sudo[120978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:32:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:32:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:51.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:32:51 compute-0 python3.9[120980]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:32:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:51.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:52 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:52 compute-0 ceph-mon[74331]: pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:32:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:52 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:52 compute-0 sudo[120978]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:53 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:53.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:53 compute-0 sudo[121133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmebrkmmwhabweadyjftasnhyzhlarup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976773.2712898-398-25323135061734/AnsiballZ_setup.py'
Nov 24 09:32:53 compute-0 sudo[121133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:53.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:53 compute-0 python3.9[121135]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:32:53 compute-0 sudo[121133]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:54 compute-0 sudo[121288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdujcmimmyeocubnftjexbfsrvocmmnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976774.0757434-422-125667584395884/AnsiballZ_stat.py'
Nov 24 09:32:54 compute-0 sudo[121288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:54 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:54 compute-0 python3.9[121290]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:32:54 compute-0 sudo[121288]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:54 compute-0 ceph-mon[74331]: pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:54 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:55 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:55 compute-0 sudo[121441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgosqybdevjiuubgsonmbwqlqemelclc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976775.0460815-449-267182854524044/AnsiballZ_stat.py'
Nov 24 09:32:55 compute-0 sudo[121441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:32:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:55.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:32:55 compute-0 python3.9[121443]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:32:55 compute-0 sudo[121441]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:55.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:56 compute-0 ceph-mon[74331]: pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:32:56 compute-0 sudo[121594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgxkexirxtmhbchrkrlcipwifuhptwjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976775.996027-479-28248518959859/AnsiballZ_command.py'
Nov 24 09:32:56 compute-0 sudo[121594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:56 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:56 compute-0 python3.9[121596]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:32:56 compute-0 sudo[121594]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:56 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:32:56.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:32:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:57 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:57 compute-0 sudo[121748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reazhjzyfpmcathtwqjpsacuqaigixgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976776.9068367-509-139757145096244/AnsiballZ_service_facts.py'
Nov 24 09:32:57 compute-0 sudo[121748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:32:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:32:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:57.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:57 compute-0 python3.9[121750]: ansible-service_facts Invoked
Nov 24 09:32:57 compute-0 network[121767]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:32:57 compute-0 network[121768]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:32:57 compute-0 network[121769]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:32:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:57.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:32:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:58 : epoch 6924262a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:32:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:58 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:58 compute-0 ceph-mon[74331]: pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:32:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:58 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:32:59 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:32:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:32:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:32:59.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:32:59 compute-0 sudo[121855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:32:59 compute-0 sudo[121855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:32:59 compute-0 sudo[121855]: pam_unix(sudo:session): session closed for user root
Nov 24 09:32:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:32:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:32:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:32:59.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:00 compute-0 sudo[121748]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:00 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:00 compute-0 ceph-mon[74331]: pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:33:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:33:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:00 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:33:00] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:33:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:33:00] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:33:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:01 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:01 compute-0 anacron[29927]: Job `cron.daily' started
Nov 24 09:33:01 compute-0 anacron[29927]: Job `cron.daily' terminated
Nov 24 09:33:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:33:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999996s ======
Nov 24 09:33:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:01.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999996s
Nov 24 09:33:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:01 : epoch 6924262a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:33:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:01 : epoch 6924262a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:33:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:01 : epoch 6924262a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:33:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:01.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:02 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:02 compute-0 ceph-mon[74331]: pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:33:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:02 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:03 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b80041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:03 compute-0 sudo[122085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akavdgjicxvymvqvnlexbouzzsqoricj ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1763976782.935678-554-146333592758198/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1763976782.935678-554-146333592758198/args'
Nov 24 09:33:03 compute-0 sudo[122085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:33:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:03.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:03 compute-0 sudo[122085]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:03.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:04 compute-0 sudo[122253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcjdmcfapgitfivwsmczheyljkopxzfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976783.8973432-587-159602189968908/AnsiballZ_dnf.py'
Nov 24 09:33:04 compute-0 sudo[122253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:04 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:04 : epoch 6924262a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:33:04 compute-0 python3.9[122255]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:33:04 compute-0 ceph-mon[74331]: pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:33:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:04 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:05 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:33:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:05.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 24 09:33:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:05.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 24 09:33:05 compute-0 sudo[122253]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:06 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b80041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:06 compute-0 ceph-mon[74331]: pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:33:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:06 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:06 compute-0 sudo[122408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmiatkvfyolmvtywxjnxabmnxwqzynmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976786.3245828-626-276345063429690/AnsiballZ_package_facts.py'
Nov 24 09:33:06 compute-0 sudo[122408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:33:06.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:33:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:07 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:07 compute-0 python3.9[122410]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 09:33:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:33:07 compute-0 sudo[122408]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 24 09:33:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:07.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 24 09:33:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:07.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:08 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:08 compute-0 sudo[122562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlsqdvxpyezzdiqsrqqeefsnxmdamumk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976788.091701-656-155101517841561/AnsiballZ_stat.py'
Nov 24 09:33:08 compute-0 sudo[122562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:08 compute-0 ceph-mon[74331]: pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:33:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:08 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b80041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:08 compute-0 python3.9[122564]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:08 compute-0 sudo[122562]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:08 compute-0 sshd-session[122566]: Connection closed by 209.38.206.249 port 38362
Nov 24 09:33:09 compute-0 sudo[122643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsekjvnxpxgnehkkzfxukuxntewjzxyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976788.091701-656-155101517841561/AnsiballZ_file.py'
Nov 24 09:33:09 compute-0 sudo[122643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:09 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:09 compute-0 python3.9[122645]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:09 compute-0 sudo[122643]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:33:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999996s ======
Nov 24 09:33:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:09.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999996s
Nov 24 09:33:09 compute-0 sshd-session[122619]: Connection closed by authenticating user root 209.38.206.249 port 38378 [preauth]
Nov 24 09:33:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 24 09:33:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:09.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 24 09:33:09 compute-0 sudo[122796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvbqrlfzifyvmkimdoxxuzjsahmqmvdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976789.630005-692-11815730085903/AnsiballZ_stat.py'
Nov 24 09:33:09 compute-0 sudo[122796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:10 compute-0 python3.9[122798]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:10 compute-0 sudo[122796]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:10 compute-0 sudo[122875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwxddajltvgpsfzxmphaxuncseukahgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976789.630005-692-11815730085903/AnsiballZ_file.py'
Nov 24 09:33:10 compute-0 sudo[122875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:10 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:10 compute-0 python3.9[122877]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:10 compute-0 sudo[122875]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:10 compute-0 ceph-mon[74331]: pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:33:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:10 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:33:10] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:33:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:33:10] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:33:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:11 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f519c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093311 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:33:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:33:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:11.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 24 09:33:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:11.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 24 09:33:12 compute-0 sudo[123030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgvkbackngpsnlldqkgfgttfhjqyfwip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976791.648808-746-85277111920737/AnsiballZ_lineinfile.py'
Nov 24 09:33:12 compute-0 sudo[123030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:12 compute-0 python3.9[123032]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:12 compute-0 sudo[123030]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:12 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51a4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:12 compute-0 ceph-mon[74331]: pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:33:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:12 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:13 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:33:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:13 compute-0 sudo[123184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqzcataqwtjtqdtcvbncdbwdtkbzoljy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976793.3811018-791-210992545471838/AnsiballZ_setup.py'
Nov 24 09:33:13 compute-0 sudo[123184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 24 09:33:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:13.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 24 09:33:13 compute-0 python3.9[123186]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:33:14 compute-0 sudo[123184]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:14 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f519c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:14 compute-0 ceph-mon[74331]: pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:33:14 compute-0 sudo[123269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usjfkvrqicergrsmdljbmoaxcqodgryx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976793.3811018-791-210992545471838/AnsiballZ_systemd.py'
Nov 24 09:33:14 compute-0 sudo[123269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:14 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51a40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:14 compute-0 systemd[92737]: Created slice User Background Tasks Slice.
Nov 24 09:33:14 compute-0 systemd[92737]: Starting Cleanup of User's Temporary Files and Directories...
Nov 24 09:33:14 compute-0 systemd[92737]: Finished Cleanup of User's Temporary Files and Directories.
Nov 24 09:33:14 compute-0 python3.9[123271]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:33:14 compute-0 sudo[123269]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:15 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:33:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:33:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:33:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 24 09:33:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:15.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 24 09:33:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:33:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:33:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:33:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:33:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:33:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:15.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:16 compute-0 sshd-session[118267]: Connection closed by 192.168.122.30 port 47824
Nov 24 09:33:16 compute-0 sshd-session[118216]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:33:16 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 24 09:33:16 compute-0 systemd[1]: session-42.scope: Consumed 23.398s CPU time.
Nov 24 09:33:16 compute-0 systemd-logind[822]: Session 42 logged out. Waiting for processes to exit.
Nov 24 09:33:16 compute-0 systemd-logind[822]: Removed session 42.
Nov 24 09:33:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:16 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:16 compute-0 ceph-mon[74331]: pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:33:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:16 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f519c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:33:16.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:33:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:33:16.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:33:16 compute-0 sshd-session[123301]: Invalid user postgres from 209.38.206.249 port 38400
Nov 24 09:33:17 compute-0 sshd-session[123301]: Connection closed by invalid user postgres 209.38.206.249 port 38400 [preauth]
Nov 24 09:33:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:17 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51a40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:33:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999996s ======
Nov 24 09:33:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:17.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999996s
Nov 24 09:33:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:17.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:18 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:18 compute-0 ceph-mon[74331]: pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:33:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:18 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:19 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:33:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:19.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:19 compute-0 sudo[123306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:33:19 compute-0 sudo[123306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:19 compute-0 sudo[123306]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:19.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:20 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51a40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:20 compute-0 ceph-mon[74331]: pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:33:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:20 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:33:20] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:33:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:33:20] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:33:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:21 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:21 compute-0 sshd-session[123332]: Accepted publickey for zuul from 192.168.122.30 port 41372 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:33:21 compute-0 systemd-logind[822]: New session 43 of user zuul.
Nov 24 09:33:21 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 24 09:33:21 compute-0 sshd-session[123332]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:33:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:33:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 24 09:33:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:21.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 24 09:33:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 24 09:33:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:21.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 24 09:33:21 compute-0 sudo[123486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxdcpichnzbakrllbmfakizxoikugacd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976801.3724015-26-53970836958133/AnsiballZ_file.py'
Nov 24 09:33:21 compute-0 sudo[123486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:22 compute-0 python3.9[123488]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:22 compute-0 sudo[123486]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:22 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:22 compute-0 sudo[123639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nialfsobopxuorinckscrzqgixruatma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976802.262646-62-142677506919327/AnsiballZ_stat.py'
Nov 24 09:33:22 compute-0 sudo[123639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:22 compute-0 ceph-mon[74331]: pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:33:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:22 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51a4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:22 compute-0 python3.9[123641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:22 compute-0 sudo[123639]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:23 compute-0 sudo[123717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhotefzbxhotbskzblqlzwrbiasvwboj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976802.262646-62-142677506919327/AnsiballZ_file.py'
Nov 24 09:33:23 compute-0 sudo[123717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:23 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:23 compute-0 python3.9[123719]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:23 compute-0 sudo[123717]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 24 09:33:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:23.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 24 09:33:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 24 09:33:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:23.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 24 09:33:23 compute-0 sshd-session[123336]: Connection closed by 192.168.122.30 port 41372
Nov 24 09:33:23 compute-0 sshd-session[123332]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:33:23 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 24 09:33:23 compute-0 systemd[1]: session-43.scope: Consumed 1.463s CPU time.
Nov 24 09:33:23 compute-0 systemd-logind[822]: Session 43 logged out. Waiting for processes to exit.
Nov 24 09:33:23 compute-0 systemd-logind[822]: Removed session 43.
Nov 24 09:33:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:24 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:24 compute-0 ceph-mon[74331]: pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:24 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:25 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:25.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 24 09:33:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:25.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 24 09:33:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:26 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f519c002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:26 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:26 compute-0 ceph-mon[74331]: pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:33:26.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:33:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:27 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:33:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 24 09:33:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:27.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 24 09:33:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:27.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:28 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:28 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f519c003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:28 compute-0 ceph-mon[74331]: pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:33:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:29 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:29 compute-0 sshd-session[123751]: Accepted publickey for zuul from 192.168.122.30 port 53432 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:33:29 compute-0 systemd-logind[822]: New session 44 of user zuul.
Nov 24 09:33:29 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 24 09:33:29 compute-0 sshd-session[123751]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:33:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:29.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:29.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:30 compute-0 python3.9[123904]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:33:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:30 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51a4003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:30 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:30 compute-0 ceph-mon[74331]: pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:33:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:33:30] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:33:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:33:30] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:33:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:31 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f519c003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:31 compute-0 sudo[124060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xraobuzevcsffinegjahusyhawuekyuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976810.9138296-59-18930887026021/AnsiballZ_file.py'
Nov 24 09:33:31 compute-0 sudo[124060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:33:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:33:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:31.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:33:31 compute-0 python3.9[124062]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:31 compute-0 sudo[124060]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:31.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:32 compute-0 sudo[124236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exerptrbgmemvmyifqygwtpbrtvzsmui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976811.8085024-83-251744439724599/AnsiballZ_stat.py'
Nov 24 09:33:32 compute-0 sudo[124236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:32 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:32 compute-0 python3.9[124238]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:32 compute-0 sudo[124236]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:32 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:32 compute-0 ceph-mon[74331]: pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:33:33 compute-0 sudo[124314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrykwesgmpufvyiatxaivapqonjiftqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976811.8085024-83-251744439724599/AnsiballZ_file.py'
Nov 24 09:33:33 compute-0 sudo[124314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:33 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:33 compute-0 python3.9[124316]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.egwf_q48 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:33 compute-0 sudo[124314]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:33.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:33.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:34 compute-0 sudo[124467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlriytdrmjgletehwmmogojlzsgbsjnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976813.7759957-143-47478505467082/AnsiballZ_stat.py'
Nov 24 09:33:34 compute-0 sudo[124467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:34 compute-0 python3.9[124469]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:34 compute-0 sudo[124467]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:34 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f519c003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:34 compute-0 sudo[124546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rptptakwcfdinisnrzxvfrxykhfmbdpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976813.7759957-143-47478505467082/AnsiballZ_file.py'
Nov 24 09:33:34 compute-0 sudo[124546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:34 compute-0 python3.9[124548]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.dyhyqw4a recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:34 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51a4003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:34 compute-0 sudo[124546]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:34 compute-0 ceph-mon[74331]: pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:35 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:35 compute-0 sudo[124699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlqsbbzfsumwoptrvxcxctfiuabykzvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976815.0367897-182-190854454542535/AnsiballZ_file.py'
Nov 24 09:33:35 compute-0 sudo[124699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:35.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:35 compute-0 python3.9[124701]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:33:35 compute-0 sudo[124699]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:35.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:36 compute-0 sudo[124851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiycyqnnpwzmarmadbdbciuyokndwgvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976815.8428376-206-170886300085031/AnsiballZ_stat.py'
Nov 24 09:33:36 compute-0 sudo[124851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:36 compute-0 python3.9[124854]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:36 compute-0 sudo[124851]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:36 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:36 compute-0 sudo[124930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjjjeiunwaumihlnmyuduxhtmzurmfgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976815.8428376-206-170886300085031/AnsiballZ_file.py'
Nov 24 09:33:36 compute-0 sudo[124930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:36 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f519c003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:36 compute-0 python3.9[124932]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:33:36 compute-0 sudo[124930]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:36 compute-0 ceph-mon[74331]: pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:33:36.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:33:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:37 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51a4003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:37 compute-0 sudo[125082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrlyziupazoongetfvsytkwooowyqnlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976816.9106674-206-88707363588750/AnsiballZ_stat.py'
Nov 24 09:33:37 compute-0 sudo[125082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:37 compute-0 python3.9[125085]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:33:37 compute-0 sudo[125082]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:33:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:37.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:33:37 compute-0 sshd-session[125083]: Invalid user maria from 209.38.206.249 port 49446
Nov 24 09:33:37 compute-0 sshd-session[125083]: Connection closed by invalid user maria 209.38.206.249 port 49446 [preauth]
Nov 24 09:33:37 compute-0 sudo[125163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odcwtjcdhveadxpysnoaruyosqzcyxqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976816.9106674-206-88707363588750/AnsiballZ_file.py'
Nov 24 09:33:37 compute-0 sudo[125163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:37 compute-0 python3.9[125165]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:33:38 compute-0 sudo[125163]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:38 compute-0 sudo[125316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnhqzadcjjmqljqvgheynqliwjkgclqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976818.288101-275-139112657321672/AnsiballZ_file.py'
Nov 24 09:33:38 compute-0 sudo[125316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:38 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:38 compute-0 python3.9[125318]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:38 compute-0 sudo[125316]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:38 compute-0 ceph-mon[74331]: pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:33:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:39 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51a4003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:39 compute-0 sudo[125469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oftflybbrlnxcxshcohojwffbnfrtpjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976819.0517735-299-249633738097587/AnsiballZ_stat.py'
Nov 24 09:33:39 compute-0 sudo[125469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:39 compute-0 sudo[125472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:33:39 compute-0 sudo[125472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:39 compute-0 sudo[125472]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:39.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:39 compute-0 sudo[125497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:33:39 compute-0 sudo[125497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:39 compute-0 python3.9[125471]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:39 compute-0 sudo[125469]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:39 compute-0 sudo[125611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkvhsfdowetffdadqadqjcitixjcccdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976819.0517735-299-249633738097587/AnsiballZ_file.py'
Nov 24 09:33:39 compute-0 sudo[125611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:39 compute-0 sudo[125619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:33:39 compute-0 sudo[125619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:39 compute-0 sudo[125619]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:39 compute-0 sudo[125497]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:39 compute-0 python3.9[125613]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:39 compute-0 sudo[125611]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:33:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:33:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:33:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:33:40 compute-0 sudo[125681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:33:40 compute-0 sudo[125681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:40 compute-0 sudo[125681]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:40 compute-0 sudo[125706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:33:40 compute-0 sudo[125706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:40 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f519c003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:40 compute-0 sudo[125890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtlpseatbtmatdsmkupxwxdaosakhxsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976820.3198538-335-1282341059145/AnsiballZ_stat.py'
Nov 24 09:33:40 compute-0 sudo[125890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:40 compute-0 podman[125901]: 2025-11-24 09:33:40.680572094 +0000 UTC m=+0.054580048 container create dad5169bed3ba68a35b66b3a805f8cb136dccf9bb18b9bf9a02177d847451c02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_curie, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:33:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:40 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f519c003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:40 compute-0 systemd[1]: Started libpod-conmon-dad5169bed3ba68a35b66b3a805f8cb136dccf9bb18b9bf9a02177d847451c02.scope.
Nov 24 09:33:40 compute-0 podman[125901]: 2025-11-24 09:33:40.654182212 +0000 UTC m=+0.028190156 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:33:40 compute-0 python3.9[125898]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:33:40 compute-0 podman[125901]: 2025-11-24 09:33:40.795494564 +0000 UTC m=+0.169502568 container init dad5169bed3ba68a35b66b3a805f8cb136dccf9bb18b9bf9a02177d847451c02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_curie, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 09:33:40 compute-0 podman[125901]: 2025-11-24 09:33:40.805323 +0000 UTC m=+0.179330914 container start dad5169bed3ba68a35b66b3a805f8cb136dccf9bb18b9bf9a02177d847451c02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_curie, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:33:40 compute-0 recursing_curie[125917]: 167 167
Nov 24 09:33:40 compute-0 systemd[1]: libpod-dad5169bed3ba68a35b66b3a805f8cb136dccf9bb18b9bf9a02177d847451c02.scope: Deactivated successfully.
Nov 24 09:33:40 compute-0 podman[125901]: 2025-11-24 09:33:40.818870134 +0000 UTC m=+0.192878058 container attach dad5169bed3ba68a35b66b3a805f8cb136dccf9bb18b9bf9a02177d847451c02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_curie, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 09:33:40 compute-0 conmon[125917]: conmon dad5169bed3ba68a35b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dad5169bed3ba68a35b66b3a805f8cb136dccf9bb18b9bf9a02177d847451c02.scope/container/memory.events
Nov 24 09:33:40 compute-0 podman[125901]: 2025-11-24 09:33:40.821349694 +0000 UTC m=+0.195357618 container died dad5169bed3ba68a35b66b3a805f8cb136dccf9bb18b9bf9a02177d847451c02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_curie, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:33:40 compute-0 ceph-mon[74331]: pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:40 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:33:40 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:33:40 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:33:40 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:33:40 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:33:40 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:33:40 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:33:40 compute-0 sudo[125890]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-62df7d14ebd8ee3f5e68494663a9bf451fe0c7eead9df72783787ecf63fd386d-merged.mount: Deactivated successfully.
Nov 24 09:33:40 compute-0 podman[125901]: 2025-11-24 09:33:40.869955967 +0000 UTC m=+0.243963881 container remove dad5169bed3ba68a35b66b3a805f8cb136dccf9bb18b9bf9a02177d847451c02 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:33:40 compute-0 systemd[1]: libpod-conmon-dad5169bed3ba68a35b66b3a805f8cb136dccf9bb18b9bf9a02177d847451c02.scope: Deactivated successfully.
Nov 24 09:33:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:33:40] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:33:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:33:40] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:33:41 compute-0 podman[125991]: 2025-11-24 09:33:41.044704959 +0000 UTC m=+0.045203882 container create 61aa1bee74b417dfabc272724d380a051ab8ccbccdb52d499b20b0a17ce406b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_thompson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:33:41 compute-0 sudo[126031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-druivzkeamfrjjdhnnjnqbucwdbtuhdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976820.3198538-335-1282341059145/AnsiballZ_file.py'
Nov 24 09:33:41 compute-0 systemd[1]: Started libpod-conmon-61aa1bee74b417dfabc272724d380a051ab8ccbccdb52d499b20b0a17ce406b8.scope.
Nov 24 09:33:41 compute-0 sudo[126031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:41 compute-0 podman[125991]: 2025-11-24 09:33:41.026162716 +0000 UTC m=+0.026661669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:33:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d3b40c6997721ede624eaebef5ae7e8dbf72c642e405851989af8bd1070534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d3b40c6997721ede624eaebef5ae7e8dbf72c642e405851989af8bd1070534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d3b40c6997721ede624eaebef5ae7e8dbf72c642e405851989af8bd1070534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d3b40c6997721ede624eaebef5ae7e8dbf72c642e405851989af8bd1070534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d3b40c6997721ede624eaebef5ae7e8dbf72c642e405851989af8bd1070534/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:41 compute-0 podman[125991]: 2025-11-24 09:33:41.150053632 +0000 UTC m=+0.150552585 container init 61aa1bee74b417dfabc272724d380a051ab8ccbccdb52d499b20b0a17ce406b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_thompson, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 09:33:41 compute-0 podman[125991]: 2025-11-24 09:33:41.159072407 +0000 UTC m=+0.159571350 container start 61aa1bee74b417dfabc272724d380a051ab8ccbccdb52d499b20b0a17ce406b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_thompson, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 24 09:33:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:41 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:41 compute-0 podman[125991]: 2025-11-24 09:33:41.162437107 +0000 UTC m=+0.162936040 container attach 61aa1bee74b417dfabc272724d380a051ab8ccbccdb52d499b20b0a17ce406b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:33:41 compute-0 sshd-session[125924]: Invalid user git from 209.38.206.249 port 49462
Nov 24 09:33:41 compute-0 python3.9[126037]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:41 compute-0 sudo[126031]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:41 compute-0 sshd-session[125924]: Connection closed by invalid user git 209.38.206.249 port 49462 [preauth]
Nov 24 09:33:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:33:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:41.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:41 compute-0 friendly_thompson[126035]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:33:41 compute-0 friendly_thompson[126035]: --> All data devices are unavailable
Nov 24 09:33:41 compute-0 systemd[1]: libpod-61aa1bee74b417dfabc272724d380a051ab8ccbccdb52d499b20b0a17ce406b8.scope: Deactivated successfully.
Nov 24 09:33:41 compute-0 podman[125991]: 2025-11-24 09:33:41.482373975 +0000 UTC m=+0.482872898 container died 61aa1bee74b417dfabc272724d380a051ab8ccbccdb52d499b20b0a17ce406b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_thompson, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:33:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3d3b40c6997721ede624eaebef5ae7e8dbf72c642e405851989af8bd1070534-merged.mount: Deactivated successfully.
Nov 24 09:33:41 compute-0 podman[125991]: 2025-11-24 09:33:41.529027842 +0000 UTC m=+0.529526775 container remove 61aa1bee74b417dfabc272724d380a051ab8ccbccdb52d499b20b0a17ce406b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 09:33:41 compute-0 systemd[1]: libpod-conmon-61aa1bee74b417dfabc272724d380a051ab8ccbccdb52d499b20b0a17ce406b8.scope: Deactivated successfully.
Nov 24 09:33:41 compute-0 sudo[125706]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:41 compute-0 sudo[126093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:33:41 compute-0 sudo[126093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:41 compute-0 sudo[126093]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:41 compute-0 sudo[126149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:33:41 compute-0 sudo[126149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:41 compute-0 sshd-session[126071]: Invalid user www from 209.38.206.249 port 49474
Nov 24 09:33:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:41 compute-0 sshd-session[126071]: Connection closed by invalid user www 209.38.206.249 port 49474 [preauth]
Nov 24 09:33:42 compute-0 podman[126233]: 2025-11-24 09:33:42.039502931 +0000 UTC m=+0.033510994 container create 437f7b888cd935813df9001a111e7e10edf76e8e1f8f59d455c9eed9c766c46a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_colden, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:33:42 compute-0 systemd[1]: Started libpod-conmon-437f7b888cd935813df9001a111e7e10edf76e8e1f8f59d455c9eed9c766c46a.scope.
Nov 24 09:33:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:33:42 compute-0 podman[126233]: 2025-11-24 09:33:42.116909623 +0000 UTC m=+0.110917706 container init 437f7b888cd935813df9001a111e7e10edf76e8e1f8f59d455c9eed9c766c46a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:33:42 compute-0 podman[126233]: 2025-11-24 09:33:42.026225632 +0000 UTC m=+0.020233715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:33:42 compute-0 podman[126233]: 2025-11-24 09:33:42.124048924 +0000 UTC m=+0.118056987 container start 437f7b888cd935813df9001a111e7e10edf76e8e1f8f59d455c9eed9c766c46a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_colden, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:33:42 compute-0 podman[126233]: 2025-11-24 09:33:42.12724004 +0000 UTC m=+0.121248103 container attach 437f7b888cd935813df9001a111e7e10edf76e8e1f8f59d455c9eed9c766c46a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_colden, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:33:42 compute-0 cool_colden[126257]: 167 167
Nov 24 09:33:42 compute-0 systemd[1]: libpod-437f7b888cd935813df9001a111e7e10edf76e8e1f8f59d455c9eed9c766c46a.scope: Deactivated successfully.
Nov 24 09:33:42 compute-0 podman[126233]: 2025-11-24 09:33:42.129566106 +0000 UTC m=+0.123574189 container died 437f7b888cd935813df9001a111e7e10edf76e8e1f8f59d455c9eed9c766c46a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:33:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a93bcf99d97ca4fdac04293e419ca154784e3ee363fb34ec10f8063f1ac0ab07-merged.mount: Deactivated successfully.
Nov 24 09:33:42 compute-0 podman[126233]: 2025-11-24 09:33:42.167708059 +0000 UTC m=+0.161716122 container remove 437f7b888cd935813df9001a111e7e10edf76e8e1f8f59d455c9eed9c766c46a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:33:42 compute-0 systemd[1]: libpod-conmon-437f7b888cd935813df9001a111e7e10edf76e8e1f8f59d455c9eed9c766c46a.scope: Deactivated successfully.
Nov 24 09:33:42 compute-0 podman[126297]: 2025-11-24 09:33:42.321019789 +0000 UTC m=+0.046032983 container create 048bdb5d2deb17da586694a68d3f71b540edbc0ccd76e93c3e498d719135030d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:33:42 compute-0 systemd[1]: Started libpod-conmon-048bdb5d2deb17da586694a68d3f71b540edbc0ccd76e93c3e498d719135030d.scope.
Nov 24 09:33:42 compute-0 podman[126297]: 2025-11-24 09:33:42.297900295 +0000 UTC m=+0.022913539 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:33:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c623629fd5dc4e1a739c64b57e020b4f9587b6804e7813cc2207d6770fa97ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c623629fd5dc4e1a739c64b57e020b4f9587b6804e7813cc2207d6770fa97ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c623629fd5dc4e1a739c64b57e020b4f9587b6804e7813cc2207d6770fa97ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c623629fd5dc4e1a739c64b57e020b4f9587b6804e7813cc2207d6770fa97ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:42 compute-0 podman[126297]: 2025-11-24 09:33:42.413944092 +0000 UTC m=+0.138957286 container init 048bdb5d2deb17da586694a68d3f71b540edbc0ccd76e93c3e498d719135030d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:33:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:42 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f519c003040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:42 compute-0 podman[126297]: 2025-11-24 09:33:42.424051234 +0000 UTC m=+0.149064428 container start 048bdb5d2deb17da586694a68d3f71b540edbc0ccd76e93c3e498d719135030d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_heyrovsky, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:33:42 compute-0 podman[126297]: 2025-11-24 09:33:42.42763641 +0000 UTC m=+0.152649594 container attach 048bdb5d2deb17da586694a68d3f71b540edbc0ccd76e93c3e498d719135030d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 24 09:33:42 compute-0 sudo[126370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fguhmmlzcwbjcaazbevvqevpztoolggb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976821.612975-371-269684274075937/AnsiballZ_systemd.py'
Nov 24 09:33:42 compute-0 sudo[126370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]: {
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:     "0": [
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:         {
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             "devices": [
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "/dev/loop3"
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             ],
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             "lv_name": "ceph_lv0",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             "lv_size": "21470642176",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             "name": "ceph_lv0",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             "tags": {
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.cluster_name": "ceph",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.crush_device_class": "",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.encrypted": "0",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.osd_id": "0",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.type": "block",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.vdo": "0",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:                 "ceph.with_tpm": "0"
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             },
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             "type": "block",
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:             "vg_name": "ceph_vg0"
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:         }
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]:     ]
Nov 24 09:33:42 compute-0 festive_heyrovsky[126319]: }
Nov 24 09:33:42 compute-0 systemd[1]: libpod-048bdb5d2deb17da586694a68d3f71b540edbc0ccd76e93c3e498d719135030d.scope: Deactivated successfully.
Nov 24 09:33:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:42 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8001d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:42 compute-0 podman[126378]: 2025-11-24 09:33:42.752114307 +0000 UTC m=+0.022477929 container died 048bdb5d2deb17da586694a68d3f71b540edbc0ccd76e93c3e498d719135030d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:33:42 compute-0 python3.9[126372]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:33:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c623629fd5dc4e1a739c64b57e020b4f9587b6804e7813cc2207d6770fa97ae-merged.mount: Deactivated successfully.
Nov 24 09:33:42 compute-0 systemd[1]: Reloading.
Nov 24 09:33:42 compute-0 podman[126378]: 2025-11-24 09:33:42.793958699 +0000 UTC m=+0.064322301 container remove 048bdb5d2deb17da586694a68d3f71b540edbc0ccd76e93c3e498d719135030d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_heyrovsky, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:33:42 compute-0 ceph-mon[74331]: pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:33:42 compute-0 sudo[126149]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:42 compute-0 systemd-rc-local-generator[126417]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:33:42 compute-0 systemd-sysv-generator[126422]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:33:43 compute-0 systemd[1]: libpod-conmon-048bdb5d2deb17da586694a68d3f71b540edbc0ccd76e93c3e498d719135030d.scope: Deactivated successfully.
Nov 24 09:33:43 compute-0 sudo[126428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:33:43 compute-0 sudo[126428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:43 compute-0 sudo[126428]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:43 compute-0 sudo[126370]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:43 compute-0 sudo[126455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:33:43 compute-0 sudo[126455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:43 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8001d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:43 compute-0 sshd-session[126216]: Invalid user kali from 209.38.206.249 port 49488
Nov 24 09:33:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:33:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:43.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:33:43 compute-0 podman[126620]: 2025-11-24 09:33:43.485747367 +0000 UTC m=+0.045820898 container create c76bb9a9b0b4a053a5ec2878bca49eff1f530670207628566c3e11f0e21e7fbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 09:33:43 compute-0 sshd-session[126216]: Connection closed by invalid user kali 209.38.206.249 port 49488 [preauth]
Nov 24 09:33:43 compute-0 systemd[1]: Started libpod-conmon-c76bb9a9b0b4a053a5ec2878bca49eff1f530670207628566c3e11f0e21e7fbc.scope.
Nov 24 09:33:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:33:43 compute-0 sudo[126689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imxecusetbcphfqbgrxhhfylzibkwzws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976823.3037143-395-271870890996415/AnsiballZ_stat.py'
Nov 24 09:33:43 compute-0 sudo[126689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:43 compute-0 podman[126620]: 2025-11-24 09:33:43.462989201 +0000 UTC m=+0.023062812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:33:43 compute-0 podman[126620]: 2025-11-24 09:33:43.56194106 +0000 UTC m=+0.122014601 container init c76bb9a9b0b4a053a5ec2878bca49eff1f530670207628566c3e11f0e21e7fbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cartwright, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:33:43 compute-0 podman[126620]: 2025-11-24 09:33:43.569495512 +0000 UTC m=+0.129569043 container start c76bb9a9b0b4a053a5ec2878bca49eff1f530670207628566c3e11f0e21e7fbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:33:43 compute-0 podman[126620]: 2025-11-24 09:33:43.574954781 +0000 UTC m=+0.135028312 container attach c76bb9a9b0b4a053a5ec2878bca49eff1f530670207628566c3e11f0e21e7fbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cartwright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 09:33:43 compute-0 sweet_cartwright[126687]: 167 167
Nov 24 09:33:43 compute-0 systemd[1]: libpod-c76bb9a9b0b4a053a5ec2878bca49eff1f530670207628566c3e11f0e21e7fbc.scope: Deactivated successfully.
Nov 24 09:33:43 compute-0 podman[126620]: 2025-11-24 09:33:43.577986835 +0000 UTC m=+0.138060366 container died c76bb9a9b0b4a053a5ec2878bca49eff1f530670207628566c3e11f0e21e7fbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cartwright, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:33:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-48fea4c7c2e828f701bbc8b1e51f224f2104799ab9b2c6b5d4be4312075962c6-merged.mount: Deactivated successfully.
Nov 24 09:33:43 compute-0 podman[126620]: 2025-11-24 09:33:43.615368069 +0000 UTC m=+0.175441600 container remove c76bb9a9b0b4a053a5ec2878bca49eff1f530670207628566c3e11f0e21e7fbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:33:43 compute-0 systemd[1]: libpod-conmon-c76bb9a9b0b4a053a5ec2878bca49eff1f530670207628566c3e11f0e21e7fbc.scope: Deactivated successfully.
Nov 24 09:33:43 compute-0 python3.9[126692]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:43 compute-0 podman[126715]: 2025-11-24 09:33:43.761311983 +0000 UTC m=+0.041151627 container create 33c76f009c51be9a82a254541162a266999658e0f8a051dd382f39cf9cd65cc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:33:43 compute-0 sudo[126689]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:43 compute-0 systemd[1]: Started libpod-conmon-33c76f009c51be9a82a254541162a266999658e0f8a051dd382f39cf9cd65cc0.scope.
Nov 24 09:33:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:43.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e84d5d7853b77f22ee1b5e8aecc3205022c2ebb6739fb9e39b65d2b3a5b0776/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e84d5d7853b77f22ee1b5e8aecc3205022c2ebb6739fb9e39b65d2b3a5b0776/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e84d5d7853b77f22ee1b5e8aecc3205022c2ebb6739fb9e39b65d2b3a5b0776/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e84d5d7853b77f22ee1b5e8aecc3205022c2ebb6739fb9e39b65d2b3a5b0776/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:33:43 compute-0 podman[126715]: 2025-11-24 09:33:43.74285695 +0000 UTC m=+0.022696614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:33:43 compute-0 podman[126715]: 2025-11-24 09:33:43.848367466 +0000 UTC m=+0.128207160 container init 33c76f009c51be9a82a254541162a266999658e0f8a051dd382f39cf9cd65cc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:33:43 compute-0 podman[126715]: 2025-11-24 09:33:43.856965262 +0000 UTC m=+0.136804906 container start 33c76f009c51be9a82a254541162a266999658e0f8a051dd382f39cf9cd65cc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_dirac, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:33:43 compute-0 podman[126715]: 2025-11-24 09:33:43.860317292 +0000 UTC m=+0.140156936 container attach 33c76f009c51be9a82a254541162a266999658e0f8a051dd382f39cf9cd65cc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_dirac, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:33:43 compute-0 sudo[126811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ramnhqrqrftzknweyfqcglwaztxsxihj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976823.3037143-395-271870890996415/AnsiballZ_file.py'
Nov 24 09:33:43 compute-0 sshd-session[126705]: Invalid user teste from 209.38.206.249 port 49494
Nov 24 09:33:43 compute-0 sudo[126811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:44 compute-0 sshd-session[126705]: Connection closed by invalid user teste 209.38.206.249 port 49494 [preauth]
Nov 24 09:33:44 compute-0 python3.9[126813]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:44 compute-0 sudo[126811]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:44 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b00023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:44 compute-0 lvm[126909]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:33:44 compute-0 lvm[126909]: VG ceph_vg0 finished
Nov 24 09:33:44 compute-0 boring_dirac[126734]: {}
Nov 24 09:33:44 compute-0 systemd[1]: libpod-33c76f009c51be9a82a254541162a266999658e0f8a051dd382f39cf9cd65cc0.scope: Deactivated successfully.
Nov 24 09:33:44 compute-0 systemd[1]: libpod-33c76f009c51be9a82a254541162a266999658e0f8a051dd382f39cf9cd65cc0.scope: Consumed 1.028s CPU time.
Nov 24 09:33:44 compute-0 podman[126715]: 2025-11-24 09:33:44.571819362 +0000 UTC m=+0.851659026 container died 33c76f009c51be9a82a254541162a266999658e0f8a051dd382f39cf9cd65cc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_dirac, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:33:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e84d5d7853b77f22ee1b5e8aecc3205022c2ebb6739fb9e39b65d2b3a5b0776-merged.mount: Deactivated successfully.
Nov 24 09:33:44 compute-0 podman[126715]: 2025-11-24 09:33:44.616399329 +0000 UTC m=+0.896238973 container remove 33c76f009c51be9a82a254541162a266999658e0f8a051dd382f39cf9cd65cc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:33:44 compute-0 systemd[1]: libpod-conmon-33c76f009c51be9a82a254541162a266999658e0f8a051dd382f39cf9cd65cc0.scope: Deactivated successfully.
Nov 24 09:33:44 compute-0 sudo[126455]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:33:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:33:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:33:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:33:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:44 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:44 compute-0 sudo[126997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:33:44 compute-0 sudo[126997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:33:44 compute-0 sudo[126997]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:44 compute-0 sudo[127072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycpfwybgonxlwtqrvoebjdawvbesviqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976824.575154-431-95265481197236/AnsiballZ_stat.py'
Nov 24 09:33:44 compute-0 sudo[127072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:44 compute-0 ceph-mon[74331]: pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:44 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:33:44 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:33:45 compute-0 python3.9[127074]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:45 compute-0 sudo[127072]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:45 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:45 compute-0 sudo[127151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmonotdccdzuclrzbhyzhshgtjoirfbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976824.575154-431-95265481197236/AnsiballZ_file.py'
Nov 24 09:33:45 compute-0 sudo[127151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:33:45
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['.nfs', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'vms', '.rgw.root', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data']
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:33:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:45.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:45 compute-0 python3.9[127153]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:45 compute-0 sudo[127151]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:33:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:33:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:45.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:33:46 compute-0 sudo[127303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgcdpafelplassubhektwgxorsvxlhdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976825.858433-467-201152727983382/AnsiballZ_systemd.py'
Nov 24 09:33:46 compute-0 sudo[127303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:46 compute-0 python3.9[127305]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:33:46 compute-0 systemd[1]: Reloading.
Nov 24 09:33:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:46 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8001d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:46 compute-0 systemd-rc-local-generator[127332]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:33:46 compute-0 systemd-sysv-generator[127335]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:33:46 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 09:33:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:46 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0002ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:46 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 09:33:46 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 09:33:46 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 09:33:46 compute-0 sudo[127303]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:46 compute-0 ceph-mon[74331]: pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:33:46.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:33:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:47 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:33:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:47.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:47.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:47 compute-0 python3.9[127497]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:33:47 compute-0 network[127514]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:33:47 compute-0 network[127515]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:33:47 compute-0 network[127516]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:33:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:48 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:48 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:48 compute-0 ceph-mon[74331]: pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:33:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:49 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:49.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:49.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:50 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:50 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0002ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:50 compute-0 ceph-mon[74331]: pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:33:50] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:33:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:33:50] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:33:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:51 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0002ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:33:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:51.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:51.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:52 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:52 compute-0 sudo[127783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joedojzzyakifdnuiswivnhrigajkbqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976832.2401083-545-119722493344176/AnsiballZ_stat.py'
Nov 24 09:33:52 compute-0 sudo[127783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:52 compute-0 python3.9[127785]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:52 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:52 compute-0 sudo[127783]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:52 compute-0 ceph-mon[74331]: pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:33:52 compute-0 sudo[127861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckxakzetblgnjimfjmvblwjjfcitchjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976832.2401083-545-119722493344176/AnsiballZ_file.py'
Nov 24 09:33:52 compute-0 sudo[127861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:53 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0002ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:53 compute-0 python3.9[127863]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:53 compute-0 sudo[127861]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:53.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:33:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:33:53 compute-0 sudo[128014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knwybdzbtwrbovryvlklzelzlngmfmun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976833.6179228-584-124911987952630/AnsiballZ_file.py'
Nov 24 09:33:53 compute-0 sudo[128014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:54 compute-0 python3.9[128016]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:54 compute-0 sudo[128014]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:54 compute-0 sshd-session[127540]: Invalid user odoo18 from 209.38.206.249 port 49504
Nov 24 09:33:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:54 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:54 compute-0 sshd-session[127540]: Connection closed by invalid user odoo18 209.38.206.249 port 49504 [preauth]
Nov 24 09:33:54 compute-0 sudo[128169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqjvhmgzekonzbgsvcxwbahdhfzyqmoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976834.315771-608-262161880096511/AnsiballZ_stat.py'
Nov 24 09:33:54 compute-0 sudo[128169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:54 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:54 compute-0 python3.9[128171]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:54 compute-0 sudo[128169]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:54 compute-0 sshd-session[128141]: Invalid user grafana from 209.38.206.249 port 33162
Nov 24 09:33:54 compute-0 ceph-mon[74331]: pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:55 compute-0 sshd-session[128141]: Connection closed by invalid user grafana 209.38.206.249 port 33162 [preauth]
Nov 24 09:33:55 compute-0 sudo[128247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxsjmjtuiktnwepnvxynukemaqqouknp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976834.315771-608-262161880096511/AnsiballZ_file.py'
Nov 24 09:33:55 compute-0 sudo[128247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:55 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:55.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:55.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:56 compute-0 python3.9[128249]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:56 compute-0 sudo[128247]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:56 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:56 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c40021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:33:56.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:33:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:33:56.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:33:57 compute-0 sudo[128401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbistzqxntfhsoduebnnowejhceovccg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976836.5943325-653-256818173584042/AnsiballZ_timezone.py'
Nov 24 09:33:57 compute-0 sudo[128401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:57 compute-0 ceph-mon[74331]: pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:57 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:57 compute-0 python3.9[128403]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 09:33:57 compute-0 systemd[1]: Starting Time & Date Service...
Nov 24 09:33:57 compute-0 systemd[1]: Started Time & Date Service.
Nov 24 09:33:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:33:57 compute-0 sudo[128401]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:57.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:57.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:58 compute-0 sudo[128558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zljopflrqhsbmtkpcdwfciqpxwubshoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976837.799492-680-86208843308209/AnsiballZ_file.py'
Nov 24 09:33:58 compute-0 sudo[128558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:33:58 compute-0 python3.9[128560]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:58 compute-0 sudo[128558]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:58 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:58 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:59 compute-0 sudo[128711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbxpdmdmeezcfcfqaqbzzrseumnupyqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976838.6074636-704-166995400735153/AnsiballZ_stat.py'
Nov 24 09:33:59 compute-0 sudo[128711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:33:59 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:33:59 compute-0 ceph-mon[74331]: pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:33:59 compute-0 python3.9[128713]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:33:59 compute-0 sudo[128711]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:33:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:33:59.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:33:59 compute-0 sudo[128790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjmwmsmhtpvijjjesglxcupbcndeljot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976838.6074636-704-166995400735153/AnsiballZ_file.py'
Nov 24 09:33:59 compute-0 sudo[128790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:33:59 compute-0 python3.9[128792]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:33:59 compute-0 sudo[128790]: pam_unix(sudo:session): session closed for user root
Nov 24 09:33:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:33:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:33:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:33:59.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:00 compute-0 sudo[128869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:34:00 compute-0 sudo[128869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:00 compute-0 sudo[128869]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:00 compute-0 sudo[128968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmxrqnjzugyjnlpdhltmstitcjskwmmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976839.9095116-740-276835040421216/AnsiballZ_stat.py'
Nov 24 09:34:00 compute-0 sudo[128968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:00 compute-0 ceph-mon[74331]: pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:00 compute-0 python3.9[128970]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:00 compute-0 sudo[128968]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:00 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:00 compute-0 sudo[129046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsvgergxfkcukohllvabjjywocvfcggj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976839.9095116-740-276835040421216/AnsiballZ_file.py'
Nov 24 09:34:00 compute-0 sudo[129046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:00 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:00 compute-0 python3.9[129048]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.bv9lulof recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:00 compute-0 sudo[129046]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:34:00] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:34:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:34:00] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:34:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:01 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:34:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:34:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:01.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:01 compute-0 sudo[129199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bczghssoejawbozkmlpffyujkjgfuoco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976841.1729348-776-141448830905040/AnsiballZ_stat.py'
Nov 24 09:34:01 compute-0 sudo[129199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:01 compute-0 python3.9[129201]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:01 compute-0 sudo[129199]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:01.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:02 compute-0 sudo[129277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-antqkrsmnluoqfisjruzsrxoqjvjgsix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976841.1729348-776-141448830905040/AnsiballZ_file.py'
Nov 24 09:34:02 compute-0 sudo[129277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:02 compute-0 ceph-mon[74331]: pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:34:02 compute-0 python3.9[129279]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:02 compute-0 sudo[129277]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:02 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:02 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:03 compute-0 sudo[129430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbmvtyuykqksaobgaszomygtwnrbwqux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976842.6095629-815-188952485267678/AnsiballZ_command.py'
Nov 24 09:34:03 compute-0 sudo[129430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:03 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:03 compute-0 python3.9[129432]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:34:03 compute-0 sudo[129430]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:03.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:03.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:03 compute-0 sudo[129584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efdrdifljqabkxpwghnrbhritlzqihpu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763976843.4952188-839-34294536122891/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 09:34:03 compute-0 sudo[129584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:04 compute-0 python3[129586]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 09:34:04 compute-0 sudo[129584]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:04 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:04 compute-0 ceph-mon[74331]: pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:04 compute-0 sudo[129737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdxwvaudoyvrcxdljzswrervmszfqftz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976844.3477697-863-74365983821575/AnsiballZ_stat.py'
Nov 24 09:34:04 compute-0 sudo[129737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:04 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:04 compute-0 python3.9[129739]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:04 compute-0 sudo[129737]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:05 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:05 compute-0 sudo[129816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxcmifdgnkvtlahdwkhgforyuhbobmxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976844.3477697-863-74365983821575/AnsiballZ_file.py'
Nov 24 09:34:05 compute-0 sudo[129816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:05.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:05 compute-0 python3.9[129818]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:05 compute-0 sudo[129816]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:05.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:06 compute-0 sudo[129968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwwoeznnlvvuxgfyjybqfxsughefdjrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976845.6781223-899-65089965052514/AnsiballZ_stat.py'
Nov 24 09:34:06 compute-0 sudo[129968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:06 compute-0 python3.9[129970]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:06 compute-0 sudo[129968]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:06 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:06 compute-0 sudo[130047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rawdqlmohccxeitayouqfspcrgjyriol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976845.6781223-899-65089965052514/AnsiballZ_file.py'
Nov 24 09:34:06 compute-0 sudo[130047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:06 compute-0 ceph-mon[74331]: pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:06 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:06 compute-0 python3.9[130049]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:06 compute-0 sudo[130047]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:34:06.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:34:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:07 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:07 compute-0 sudo[130200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqfbkzrnvbeskaqrrbjfrugiisxvxbuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976847.0407345-935-36210613218284/AnsiballZ_stat.py'
Nov 24 09:34:07 compute-0 sudo[130200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:34:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:07.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:07 compute-0 python3.9[130202]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:07 compute-0 sudo[130200]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:07.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:07 compute-0 sudo[130278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stekjjkpdnkietzozzweiscqzgjtclcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976847.0407345-935-36210613218284/AnsiballZ_file.py'
Nov 24 09:34:07 compute-0 sudo[130278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:08 compute-0 python3.9[130280]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:08 compute-0 sudo[130278]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:08 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:08 compute-0 sudo[130431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcepisqeuogjkqofrfqynaurbjkmxjje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976848.2878604-971-254357003933861/AnsiballZ_stat.py'
Nov 24 09:34:08 compute-0 sudo[130431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:08 compute-0 ceph-mon[74331]: pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:34:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:08 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:08 compute-0 python3.9[130433]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:08 compute-0 sudo[130431]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:09 compute-0 sudo[130509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwclxildidxiqachhimznsdajeprcozv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976848.2878604-971-254357003933861/AnsiballZ_file.py'
Nov 24 09:34:09 compute-0 sudo[130509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:09 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:09 compute-0 python3.9[130511]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:09 compute-0 sudo[130509]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:09.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:09.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:09 compute-0 sudo[130662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdjvrimxriajumpqxttltzsmfddtqcvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976849.5588596-1007-27240310578353/AnsiballZ_stat.py'
Nov 24 09:34:09 compute-0 sudo[130662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:10 compute-0 python3.9[130664]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:10 compute-0 sudo[130662]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:10 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:10 compute-0 sudo[130741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vheyhshpotudmprqznojcvmwrccmkikz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976849.5588596-1007-27240310578353/AnsiballZ_file.py'
Nov 24 09:34:10 compute-0 sudo[130741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:10 compute-0 ceph-mon[74331]: pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:10 compute-0 python3.9[130743]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:10 compute-0 sudo[130741]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:10 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:34:10] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:34:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:34:10] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:34:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:11 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:11 compute-0 sudo[130894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bovokctqcjsermqhgpvkvjwljwqvueql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976850.9590764-1046-234524473470230/AnsiballZ_command.py'
Nov 24 09:34:11 compute-0 sudo[130894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:34:11 compute-0 python3.9[130896]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
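[editor's note] The check task logged just above concatenates the five edpm rule fragments and pipes them through `nft -c -f -`, which parses the combined ruleset without loading anything into the kernel. A minimal standalone sketch of the same validation (file paths taken from the log entry; run as root on the node):

    #!/usr/bin/env python3
    """Re-run the edpm nftables dry-run check: concatenate the rule
    fragments in include order and feed them to `nft -c -f -`."""
    import subprocess

    # Order matters: chains must be defined before the rules that
    # populate them, and the jump rules come last.
    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    ruleset = "".join(open(path).read() for path in FRAGMENTS)
    # -c: check only (nothing is applied); -f -: read ruleset from stdin
    result = subprocess.run(["nft", "-c", "-f", "-"], input=ruleset,
                            text=True, capture_output=True)
    if result.returncode != 0:
        raise SystemExit(f"nft validation failed:\n{result.stderr}")
    print("combined ruleset parses cleanly")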
Nov 24 09:34:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:11.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:11 compute-0 sudo[130894]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:11.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:12 compute-0 sudo[131050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsxqtigkolgezyslckgedysgkqojgzto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976851.7576542-1070-49084967821290/AnsiballZ_blockinfile.py'
Nov 24 09:34:12 compute-0 sudo[131050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:12 compute-0 python3.9[131052]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:12 compute-0 sudo[131050]: pam_unix(sudo:session): session closed for user root
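[editor's note] Given the module arguments logged above (marker `# {mark} ANSIBLE MANAGED BLOCK`, marker_begin=BEGIN, marker_end=END), the blockinfile task should leave a managed block in /etc/sysconfig/nftables.conf along these lines — reconstructed from the parameters, not copied from the host — with `nft -c -f %s` run against the candidate file before the write is committed:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK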
Nov 24 09:34:12 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:34:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:12 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:12 compute-0 ceph-mon[74331]: pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:34:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:12 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51c4003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:12 compute-0 sudo[131203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwzpycqfcmfgipwglnkbyqzkelgndrer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976852.714547-1097-261568567484631/AnsiballZ_file.py'
Nov 24 09:34:12 compute-0 sudo[131203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:13 compute-0 python3.9[131205]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:13 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:13 compute-0 sudo[131203]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:34:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:13.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:34:13 compute-0 sudo[131356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmjlisidjddifdwcrhmfweojqschqizn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976853.3419328-1097-36992355926704/AnsiballZ_file.py'
Nov 24 09:34:13 compute-0 sudo[131356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:13 compute-0 python3.9[131358]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:13 compute-0 sudo[131356]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:13.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:14 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:14 compute-0 ceph-mon[74331]: pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:14 compute-0 sudo[131511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqtyxswmebkzyrvkiwtxnmdfrrfooztz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976854.309507-1142-272325254527333/AnsiballZ_mount.py'
Nov 24 09:34:14 compute-0 sudo[131511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:14 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:14 compute-0 python3.9[131513]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 09:34:14 compute-0 sudo[131511]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:15 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f519c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:15 compute-0 sudo[131664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrnhftaeiovkjmjyextdfgmumzeuvfzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976855.1036847-1142-90489567764161/AnsiballZ_mount.py'
Nov 24 09:34:15 compute-0 sudo[131664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:34:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:34:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:15.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:34:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:34:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:34:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:34:15 compute-0 python3.9[131666]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 09:34:15 compute-0 sudo[131664]: pam_unix(sudo:session): session closed for user root
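[editor's note] The two ansible.posix.mount tasks above (state=mounted) mount the hugetlbfs instances immediately and persist them to /etc/fstab: /dev/hugepages1G with 1 GiB pages, /dev/hugepages2M with 2 MiB pages, on the directories created earlier with group=hugetlbfs. A small sanity-check sketch, to be run on the node itself (paths taken from the log entries):

    #!/usr/bin/env python3
    """Confirm the two hugetlbfs mounts created above are live by
    scanning the kernel mount table."""
    import os

    for path in ("/dev/hugepages1G", "/dev/hugepages2M"):
        print(path, "mounted:", os.path.ismount(path))
        with open("/proc/mounts") as mounts:
            for line in mounts:
                fields = line.split()
                # fields: source, mount point, fstype, options, dump, passno
                if fields[1] == path:
                    print("   fstype:", fields[2], "options:", fields[3])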
Nov 24 09:34:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:34:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:15.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:16 compute-0 sshd-session[123754]: Connection closed by 192.168.122.30 port 53432
Nov 24 09:34:16 compute-0 sshd-session[123751]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:34:16 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 24 09:34:16 compute-0 systemd[1]: session-44.scope: Consumed 29.038s CPU time.
Nov 24 09:34:16 compute-0 systemd-logind[822]: Session 44 logged out. Waiting for processes to exit.
Nov 24 09:34:16 compute-0 systemd-logind[822]: Removed session 44.
Nov 24 09:34:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:16 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b8002fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:16 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51ac003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:16 compute-0 ceph-mon[74331]: pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:34:16.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:34:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:17 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:34:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:34:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:17.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:34:17 compute-0 sshd-session[131692]: Invalid user vyos from 209.38.206.249 port 35114
Nov 24 09:34:17 compute-0 sshd-session[131692]: Connection closed by invalid user vyos 209.38.206.249 port 35114 [preauth]
Nov 24 09:34:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:17.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:18 compute-0 kernel: ganesha.nfsd[126381]: segfault at 50 ip 00007f527a9c632e sp 00007f5232ffc210 error 4 in libntirpc.so.5.8[7f527a9ab000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 24 09:34:18 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:34:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[116619]: 24/11/2025 09:34:18 : epoch 6924262a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f51b0003680 fd 39 proxy ignored for local
Nov 24 09:34:18 compute-0 systemd[1]: Started Process Core Dump (PID 131696/UID 0).
Nov 24 09:34:18 compute-0 ceph-mon[74331]: pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:34:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:19.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:19 compute-0 systemd-coredump[131697]: Process 116623 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 59:
                                                    #0  0x00007f527a9c632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
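[editor's note] The kernel segfault line at 09:34:18 gives both the faulting instruction pointer ("ip") and the base of the libntirpc.so.5.8 mapping, so the crash offset inside the mapped segment is simply their difference. The systemd-coredump entry just above reports +0x2232e relative to the ELF file instead, which can legitimately differ from the segment-relative offset when the executable segment is not mapped at file offset zero. A small arithmetic sketch using the values copied from the two entries:

    #!/usr/bin/env python3
    """Compute the crash offset inside libntirpc from the kernel
    segfault line (values copied from the 09:34:18 log entries)."""
    ip   = 0x7f527a9c632e   # faulting instruction pointer ("ip" field)
    base = 0x7f527a9ab000   # start of the libntirpc.so.5.8 mapping

    print(f"segment-relative offset: {ip - base:#x}")   # 0x1b32e
    # systemd-coredump reports "+0x2232e" relative to the ELF file; either
    # offset can be resolved against a debuginfo build, for example with:
    #   eu-addr2line -e /usr/lib64/libntirpc.so.5.8 0x2232e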
Nov 24 09:34:19 compute-0 systemd[1]: systemd-coredump@1-131696-0.service: Deactivated successfully.
Nov 24 09:34:19 compute-0 systemd[1]: systemd-coredump@1-131696-0.service: Consumed 1.151s CPU time.
Nov 24 09:34:19 compute-0 podman[131703]: 2025-11-24 09:34:19.777525047 +0000 UTC m=+0.028075969 container died 91116f2070f86f3d214da86a09fe74e8d242d995ea22ce4da683ead47b106935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffc80b4a79f47a05f054ed559a7695a704214d606723e81ad116f7852185cbae-merged.mount: Deactivated successfully.
Nov 24 09:34:19 compute-0 podman[131703]: 2025-11-24 09:34:19.82696603 +0000 UTC m=+0.077516942 container remove 91116f2070f86f3d214da86a09fe74e8d242d995ea22ce4da683ead47b106935 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:34:19 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:34:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:19.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:19 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Failed with result 'exit-code'.
Nov 24 09:34:19 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.388s CPU time.
Nov 24 09:34:20 compute-0 sudo[131746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:34:20 compute-0 sudo[131746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:20 compute-0 sudo[131746]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:20 compute-0 ceph-mon[74331]: pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:34:20] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Nov 24 09:34:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:34:20] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Nov 24 09:34:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:34:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:21.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:21.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:21 compute-0 sshd-session[131773]: Accepted publickey for zuul from 192.168.122.30 port 49946 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:34:21 compute-0 systemd-logind[822]: New session 45 of user zuul.
Nov 24 09:34:21 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 24 09:34:21 compute-0 sshd-session[131773]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:34:22 compute-0 sudo[131927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiaojjtezrucspmspnuwwjoaiczkitny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976862.0546877-18-277555979335557/AnsiballZ_tempfile.py'
Nov 24 09:34:22 compute-0 sudo[131927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:22 compute-0 python3.9[131929]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 09:34:22 compute-0 sudo[131927]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:22 compute-0 ceph-mon[74331]: pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:34:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:23 compute-0 sudo[132080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peeywjjqenzbfnqpkpuhqjplpynwwchg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976862.9813263-54-29832138293578/AnsiballZ_stat.py'
Nov 24 09:34:23 compute-0 sudo[132080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:23.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:23 compute-0 python3.9[132082]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:34:23 compute-0 sudo[132080]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:34:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:23.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:34:24 compute-0 sudo[132235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvqndqxlknbjfwrpovtystmkpweipdde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976863.8447516-78-144484217344636/AnsiballZ_slurp.py'
Nov 24 09:34:24 compute-0 sudo[132235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093424 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:34:24 compute-0 python3.9[132237]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 24 09:34:24 compute-0 sudo[132235]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:24 compute-0 ceph-mon[74331]: pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:24 compute-0 sudo[132387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxxsmplcqodlzhaygrovojzkbrxicfsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976864.705434-102-59213446414553/AnsiballZ_stat.py'
Nov 24 09:34:24 compute-0 sudo[132387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:25 compute-0 python3.9[132389]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.vntip_vo follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:34:25 compute-0 sudo[132387]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:25.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:25 compute-0 sudo[132513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcgixikcxckxinsxbpfnzfkmvtgyqvvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976864.705434-102-59213446414553/AnsiballZ_copy.py'
Nov 24 09:34:25 compute-0 sudo[132513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:25 compute-0 python3.9[132515]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.vntip_vo mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976864.705434-102-59213446414553/.source.vntip_vo _original_basename=.n15_1v7e follow=False checksum=f51461b6f6171622d95e6dfd4bfc1927ea303d6e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:25 compute-0 sudo[132513]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:25.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:26 compute-0 sshd-session[71229]: Received disconnect from 38.129.56.127 port 43098:11: disconnected by user
Nov 24 09:34:26 compute-0 sshd-session[71229]: Disconnected from user zuul 38.129.56.127 port 43098
Nov 24 09:34:26 compute-0 sshd-session[71226]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:34:26 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 24 09:34:26 compute-0 systemd[1]: session-18.scope: Consumed 1min 36.782s CPU time.
Nov 24 09:34:26 compute-0 systemd-logind[822]: Session 18 logged out. Waiting for processes to exit.
Nov 24 09:34:26 compute-0 systemd-logind[822]: Removed session 18.
Nov 24 09:34:26 compute-0 sudo[132666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpkbsmmvggqxlppxgufpxatvljjkcsui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976866.1637723-147-187843343440738/AnsiballZ_setup.py'
Nov 24 09:34:26 compute-0 sudo[132666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:26 compute-0 ceph-mon[74331]: pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:34:26.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:34:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:34:26.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:34:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:34:26.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:34:27 compute-0 python3.9[132668]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:34:27 compute-0 sudo[132666]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:34:27 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 09:34:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:27.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:27 compute-0 sudo[132821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yupdgxjkorlessjwgtkwwowvlezkqzfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976867.3308046-172-45060322192044/AnsiballZ_blockinfile.py'
Nov 24 09:34:27 compute-0 sudo[132821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:27 compute-0 python3.9[132823]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnPh2FYKCqB5Rxe2d73LAea+vmvipLFksP43GM8QFNtdkL9UXsBFKIlbvhCArQ0+q5/EXcOy13rEWVabeuzYdek35bvnCWnqrlaoEFqEV7Y7SDrutMHxHvnLthse/1jj4AvtjvQXG0bKruDgtz2CBksRaKWTEHPZHLOYOwWLGogWVazacOPagjlMQ9UdpYvwfqgKnjMpl6sHCvQC7C0kTNvrYrrhUZqReUWyggx/XcC/YJvSYvMW1wNRhYmypPzEXu8QXt0ywHvCucILZcZqBE1/lKAUCLqDEkB/xpMnKiZ/EmDtyv8AP7H231WeEoaU4BziaD2jSd/H6lr2JJwpKBlrGkti8gQpJHtDytAtbVtrLD5fW+1GkobqN/2GXjNnvzuLB36OhT4nysfJ6BPP3sgaaZ2RJSzP5hI3jfFVn/NYjbaRIoo+tOB50PJeIPj6c5uMX+Qcb2V6EOUwogIRhtwN7A1XHh8dQPCUVYCUmNIq1K7NZ3Hxf+BqhVsSj6SK0=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINu5/fR7YXhb91kwrOd7U+mnimdcm+o61ru6zTYmFIZO
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJFgzeIWa1Ve+dIxs7Pjz8TnBGpgkm/KAIeb7PoVU+QfPqP68TrTBJjwgq/5DOilENFVsFmr+3WdERS0uMWfxXo=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyBn9mTS8EhHsIKYO0tLgGtKOo5KK33vyjqFzXOs43ZcW8GNKmSQ7DXnq80OCGGkDE9aL5uVEQ82MaYpYE8rZVZGrTF1heqhLe2ModNgcaUA+dBOzScRYEm5JAsj6ajcAc7fiPseazHiC80XQlEo+bwF6XHf/i9t7MHMqQCKdM+qnsEd6JeYe+Zy6X7Web4mN4mbvDaHxjBAdxuR0g0bKoYRjFeeNQyQQ/2Fpsa/i/ZqFVU59TrQ1vm9wLk9wJQd7mBQsdxizekzHGMkE5Ub8VdN43iscVyKKhZWeUOyEK2HASt+n/fHjIsFD65a4GLiHFuJ8DJ4CrWFrwt1RIXLkNFOImjH5kiMO55d/Qogf5F33Mkto3ntPQP/tShtBEDIzc9JCE7vYLFjk/bMSUcK9/u41E8suBkZBHnzXC8+eB6XCoYYNxA+cowaSg5+YCSxL6yON9u34LV+i3jZosNYNivLHjOmOsyGEs/Az6NLkHYzxYCHY042etu9Py2/lONrk=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDX1cMQF3siye3qNUS07EBS+iX+poG1/aIqFR51WsltV
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFy78zaPxoZwc0f5pE0EdJcb6EwSlQGeMhelmYFBlrBeD2fH3vCrxrTbbmmM9DSQFtIo8sNV7/s7CV9dvbvMOzQ=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYj9G0Ft/Psyl/13EAEebfB7qR7surocLwWTVKKcclTBPrKIFnHkxuGFUee1a6DQGup+ENEdhJN2MOXFv/jskxJUsoILDHuvx17jHKFvMSR7ycfe+1umEqgfKCHGxlLXobZjj7t2PzAveNkTk+zeX8pqLH1q86LI01fH0n3jdSksqEXvxbiDLMspPTM3alGxNI4pztPvN3i+0qfCPD5SL9dhFsP4C8IVTBWAM4g7Qd6LyKhx+MVoEVecLL6jsM8z+zArVsZKFcZOKFpl0MTeWdpNR0b4u0ILO59y38D/dVoM45NRDpIi7HyoS7TsD0XpP+3zP8hGo4M35QU+a9YRmdCaUChLmqjfUprjnQrusAuQfP406rQ3JlgWs3YAwF0IPhvHv57pPWm3xGwKPFpO0Jguw5cQdZZvYk4tS9JvlCz5+Yyfm3+9T+k1KLfcZ+zlvOYKz+BXNiPfk1bF9ML7/KEIyJjGf32o5nEp0H1sH24wrSIroXa+woila4KBTffe8=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFQe/vdPzZywzEntIohbfJ9grfNBp30Atbg8qy8BeQ3c
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPhaUxRkg9RrudtznCKCcwWhf1hoSfCyCfTHlGI62beVEpMD4en9bzfcuYnvB/Qm3vgzgUVMpS53KCL9bmqBfT8=
                                              create=True mode=0644 path=/tmp/ansible.vntip_vo state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:27 compute-0 sudo[132821]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:28 compute-0 sudo[132974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmxflhraogkfmfgtxyexiylvlavrlfxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976868.163539-196-92047594046298/AnsiballZ_command.py'
Nov 24 09:34:28 compute-0 sudo[132974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:28 compute-0 python3.9[132976]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.vntip_vo' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:34:28 compute-0 sudo[132974]: pam_unix(sudo:session): session closed for user root
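[editor's note] The shell task above copies the blockinfile-assembled temp file over /etc/ssh/ssh_known_hosts, so each compute node's RSA, Ed25519, and ECDSA keys should now be resolvable system-wide. A quick verification sketch using `ssh-keygen -F`, which performs the same hashed/plain lookup the ssh client does (hostnames taken from the managed block above):

    #!/usr/bin/env python3
    """Check that each compute node resolves in the freshly written
    system-wide known_hosts file."""
    import subprocess

    HOSTS = (
        "compute-0.ctlplane.example.com",
        "compute-1.ctlplane.example.com",
        "compute-2.ctlplane.example.com",
    )
    for host in HOSTS:
        # ssh-keygen -F exits 0 when the host is found, nonzero otherwise
        found = subprocess.run(
            ["ssh-keygen", "-F", host, "-f", "/etc/ssh/ssh_known_hosts"],
            capture_output=True).returncode == 0
        print(f"{host}: {'found' if found else 'MISSING'}")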
Nov 24 09:34:28 compute-0 ceph-mon[74331]: pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:34:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:34:29 compute-0 sudo[133129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqlntassgymepceqninoqugpzsuuzuii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976869.0344326-220-42643940913477/AnsiballZ_file.py'
Nov 24 09:34:29 compute-0 sudo[133129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:29.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:29 compute-0 python3.9[133131]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.vntip_vo state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:29 compute-0 sudo[133129]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:29.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:30 compute-0 sshd-session[131776]: Connection closed by 192.168.122.30 port 49946
Nov 24 09:34:30 compute-0 sshd-session[131773]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:34:30 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 24 09:34:30 compute-0 systemd[1]: session-45.scope: Consumed 5.146s CPU time.
Nov 24 09:34:30 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Scheduled restart job, restart counter is at 2.
Nov 24 09:34:30 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:34:30 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.388s CPU time.
Nov 24 09:34:30 compute-0 systemd-logind[822]: Session 45 logged out. Waiting for processes to exit.
Nov 24 09:34:30 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:34:30 compute-0 systemd-logind[822]: Removed session 45.
Nov 24 09:34:30 compute-0 podman[133203]: 2025-11-24 09:34:30.391618755 +0000 UTC m=+0.046680337 container create df9e80bc1955751465649d3c32cf2badcbe991afb82a69a98d6dbf4f4064c0aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Nov 24 09:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc5bed0541814efe3dddc6359c8ad3c2d9239e94e5fc2a9330a444612bde598/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc5bed0541814efe3dddc6359c8ad3c2d9239e94e5fc2a9330a444612bde598/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc5bed0541814efe3dddc6359c8ad3c2d9239e94e5fc2a9330a444612bde598/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dc5bed0541814efe3dddc6359c8ad3c2d9239e94e5fc2a9330a444612bde598/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ssprex-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:30 compute-0 podman[133203]: 2025-11-24 09:34:30.36656104 +0000 UTC m=+0.021622642 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:34:30 compute-0 podman[133203]: 2025-11-24 09:34:30.463428807 +0000 UTC m=+0.118490409 container init df9e80bc1955751465649d3c32cf2badcbe991afb82a69a98d6dbf4f4064c0aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 09:34:30 compute-0 podman[133203]: 2025-11-24 09:34:30.468574023 +0000 UTC m=+0.123635605 container start df9e80bc1955751465649d3c32cf2badcbe991afb82a69a98d6dbf4f4064c0aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 09:34:30 compute-0 bash[133203]: df9e80bc1955751465649d3c32cf2badcbe991afb82a69a98d6dbf4f4064c0aa
Nov 24 09:34:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:34:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:34:30 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:34:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:34:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:34:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:34:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:34:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:34:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:34:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:34:30] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:34:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:34:30] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:34:30 compute-0 ceph-mon[74331]: pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:34:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:34:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:34:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:31.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:31.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:33 compute-0 ceph-mon[74331]: pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:34:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:34:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:33.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:33.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:35 compute-0 ceph-mon[74331]: pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:34:35 compute-0 sshd-session[133264]: Accepted publickey for zuul from 192.168.122.30 port 58196 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:34:35 compute-0 systemd-logind[822]: New session 46 of user zuul.
Nov 24 09:34:35 compute-0 systemd[1]: Started Session 46 of User zuul.
Nov 24 09:34:35 compute-0 sshd-session[133264]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:34:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:34:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:35.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:35.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:36 compute-0 python3.9[133418]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:34:36 compute-0 sshd-session[133424]: Invalid user jenkins from 209.38.206.249 port 37488
Nov 24 09:34:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:36 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:34:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:36 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:34:36 compute-0 sshd-session[133424]: Connection closed by invalid user jenkins 209.38.206.249 port 37488 [preauth]
Nov 24 09:34:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:34:36.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:34:37 compute-0 ceph-mon[74331]: pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:34:37 compute-0 sudo[133576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhwhcjjoftjdujyrplekpgqfgkndgohx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976876.6270056-56-92573685155404/AnsiballZ_systemd.py'
Nov 24 09:34:37 compute-0 sudo[133576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:34:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:37.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:37 compute-0 python3.9[133578]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 09:34:37 compute-0 sudo[133576]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:37.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:38 compute-0 sudo[133732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htgoaxoqjggsljbhlznkhfxcewpcxtqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976877.7541783-80-275490830959380/AnsiballZ_systemd.py'
Nov 24 09:34:38 compute-0 sudo[133732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:38 compute-0 python3.9[133734]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:34:38 compute-0 sudo[133732]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:39 compute-0 ceph-mon[74331]: pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:34:39 compute-0 sudo[133886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkkkeyjbifyoqplabuvxttmpbqoexalf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976878.6955678-107-207006673082168/AnsiballZ_command.py'
Nov 24 09:34:39 compute-0 sudo[133886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:39 compute-0 python3.9[133888]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:34:39 compute-0 sudo[133886]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:34:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:39.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:39 compute-0 sudo[134040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbjsqhnlhngjgedhpeomuzcozsmkvejr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976879.5411813-131-196430041072530/AnsiballZ_stat.py'
Nov 24 09:34:39 compute-0 sudo[134040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:40 compute-0 python3.9[134042]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:34:40 compute-0 sudo[134040]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:40 compute-0 sudo[134044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:34:40 compute-0 sudo[134044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:40 compute-0 sudo[134044]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:40 compute-0 sudo[134218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmmuolgjteqbktvuuugihljnbmqjbvhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976880.4082103-158-233683355395813/AnsiballZ_file.py'
Nov 24 09:34:40 compute-0 sudo[134218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:34:40] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:34:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:34:40] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:34:41 compute-0 python3.9[134220]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:34:41 compute-0 sudo[134218]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:41 compute-0 ceph-mon[74331]: pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:34:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:34:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:41.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:41 compute-0 sshd-session[133267]: Connection closed by 192.168.122.30 port 58196
Nov 24 09:34:41 compute-0 sshd-session[133264]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:34:41 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Nov 24 09:34:41 compute-0 systemd[1]: session-46.scope: Consumed 3.788s CPU time.
Nov 24 09:34:41 compute-0 systemd-logind[822]: Session 46 logged out. Waiting for processes to exit.
Nov 24 09:34:41 compute-0 systemd-logind[822]: Removed session 46.
Nov 24 09:34:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:41.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:34:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:43 compute-0 ceph-mon[74331]: pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:34:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:43 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:34:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:43.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:43.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:44 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:44 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:45 compute-0 sudo[134265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:34:45 compute-0 sudo[134265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:45 compute-0 sudo[134265]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:45 compute-0 sudo[134290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:34:45 compute-0 sudo[134290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:45 compute-0 ceph-mon[74331]: pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:34:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:45 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:34:45
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.data', '.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.control', '.nfs', 'default.rgw.meta']
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:34:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:45.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:34:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:34:45 compute-0 sudo[134290]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:34:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:34:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:34:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:34:45 compute-0 sudo[134349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:34:45 compute-0 sudo[134349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:45 compute-0 sudo[134349]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:45 compute-0 sudo[134374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:34:45 compute-0 sudo[134374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:45.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:46 compute-0 ceph-mon[74331]: pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:34:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:34:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:34:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:34:46 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:34:46 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:34:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:34:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:34:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:34:46 compute-0 podman[134441]: 2025-11-24 09:34:46.324159698 +0000 UTC m=+0.055194636 container create df58df03813bde82fef205a6e55478bc3e16d41585a66f967de8be89e5e92f7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:34:46 compute-0 systemd[1]: Started libpod-conmon-df58df03813bde82fef205a6e55478bc3e16d41585a66f967de8be89e5e92f7d.scope.
Nov 24 09:34:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:34:46 compute-0 podman[134441]: 2025-11-24 09:34:46.301975294 +0000 UTC m=+0.033010242 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:34:46 compute-0 podman[134441]: 2025-11-24 09:34:46.41189463 +0000 UTC m=+0.142929568 container init df58df03813bde82fef205a6e55478bc3e16d41585a66f967de8be89e5e92f7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:34:46 compute-0 podman[134441]: 2025-11-24 09:34:46.418891891 +0000 UTC m=+0.149926809 container start df58df03813bde82fef205a6e55478bc3e16d41585a66f967de8be89e5e92f7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:34:46 compute-0 podman[134441]: 2025-11-24 09:34:46.423014863 +0000 UTC m=+0.154049811 container attach df58df03813bde82fef205a6e55478bc3e16d41585a66f967de8be89e5e92f7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 24 09:34:46 compute-0 wonderful_ritchie[134458]: 167 167
Nov 24 09:34:46 compute-0 systemd[1]: libpod-df58df03813bde82fef205a6e55478bc3e16d41585a66f967de8be89e5e92f7d.scope: Deactivated successfully.
Nov 24 09:34:46 compute-0 conmon[134458]: conmon df58df03813bde82fef2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df58df03813bde82fef205a6e55478bc3e16d41585a66f967de8be89e5e92f7d.scope/container/memory.events
Nov 24 09:34:46 compute-0 podman[134441]: 2025-11-24 09:34:46.42698406 +0000 UTC m=+0.158018978 container died df58df03813bde82fef205a6e55478bc3e16d41585a66f967de8be89e5e92f7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:34:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:46 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093446 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:34:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ecc8da42e5c41bbc9d8f60cfcef36ea31c4ace71cb9f84ffa796e2a21e5a70b-merged.mount: Deactivated successfully.
Nov 24 09:34:46 compute-0 podman[134441]: 2025-11-24 09:34:46.479505419 +0000 UTC m=+0.210540337 container remove df58df03813bde82fef205a6e55478bc3e16d41585a66f967de8be89e5e92f7d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_ritchie, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:34:46 compute-0 systemd[1]: libpod-conmon-df58df03813bde82fef205a6e55478bc3e16d41585a66f967de8be89e5e92f7d.scope: Deactivated successfully.
Nov 24 09:34:46 compute-0 podman[134483]: 2025-11-24 09:34:46.650036683 +0000 UTC m=+0.059988063 container create 3be3af48f658856f8f774ca7d5c5d7e547a3b8e3ab187ddc9033886fb1149199 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_dhawan, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:34:46 compute-0 systemd[1]: Started libpod-conmon-3be3af48f658856f8f774ca7d5c5d7e547a3b8e3ab187ddc9033886fb1149199.scope.
Nov 24 09:34:46 compute-0 podman[134483]: 2025-11-24 09:34:46.614840298 +0000 UTC m=+0.024791698 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:34:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53985a8b3ceebdea3f60b21f71fe53b817138d4b92b53ddf14b296ff68f84724/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53985a8b3ceebdea3f60b21f71fe53b817138d4b92b53ddf14b296ff68f84724/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53985a8b3ceebdea3f60b21f71fe53b817138d4b92b53ddf14b296ff68f84724/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53985a8b3ceebdea3f60b21f71fe53b817138d4b92b53ddf14b296ff68f84724/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53985a8b3ceebdea3f60b21f71fe53b817138d4b92b53ddf14b296ff68f84724/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:46 compute-0 podman[134483]: 2025-11-24 09:34:46.738656236 +0000 UTC m=+0.148607636 container init 3be3af48f658856f8f774ca7d5c5d7e547a3b8e3ab187ddc9033886fb1149199 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:34:46 compute-0 podman[134483]: 2025-11-24 09:34:46.747629026 +0000 UTC m=+0.157580406 container start 3be3af48f658856f8f774ca7d5c5d7e547a3b8e3ab187ddc9033886fb1149199 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_dhawan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:34:46 compute-0 podman[134483]: 2025-11-24 09:34:46.757867068 +0000 UTC m=+0.167818448 container attach 3be3af48f658856f8f774ca7d5c5d7e547a3b8e3ab187ddc9033886fb1149199 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_dhawan, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:34:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:46 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:34:46.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:34:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:34:46.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:34:47 compute-0 sshd-session[134510]: Accepted publickey for zuul from 192.168.122.30 port 54418 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:34:47 compute-0 angry_dhawan[134500]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:34:47 compute-0 angry_dhawan[134500]: --> All data devices are unavailable
Nov 24 09:34:47 compute-0 systemd-logind[822]: New session 47 of user zuul.
Nov 24 09:34:47 compute-0 systemd[1]: Started Session 47 of User zuul.
Nov 24 09:34:47 compute-0 sshd-session[134510]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:34:47 compute-0 systemd[1]: libpod-3be3af48f658856f8f774ca7d5c5d7e547a3b8e3ab187ddc9033886fb1149199.scope: Deactivated successfully.
Nov 24 09:34:47 compute-0 podman[134483]: 2025-11-24 09:34:47.128550071 +0000 UTC m=+0.538501501 container died 3be3af48f658856f8f774ca7d5c5d7e547a3b8e3ab187ddc9033886fb1149199 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_dhawan, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:34:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-53985a8b3ceebdea3f60b21f71fe53b817138d4b92b53ddf14b296ff68f84724-merged.mount: Deactivated successfully.
Nov 24 09:34:47 compute-0 podman[134483]: 2025-11-24 09:34:47.182426223 +0000 UTC m=+0.592377603 container remove 3be3af48f658856f8f774ca7d5c5d7e547a3b8e3ab187ddc9033886fb1149199 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:34:47 compute-0 systemd[1]: libpod-conmon-3be3af48f658856f8f774ca7d5c5d7e547a3b8e3ab187ddc9033886fb1149199.scope: Deactivated successfully.
Nov 24 09:34:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:47 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:47 compute-0 sudo[134374]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:47 compute-0 sudo[134562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:34:47 compute-0 sudo[134562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:47 compute-0 sudo[134562]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:47 compute-0 sudo[134608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:34:47 compute-0 sudo[134608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:34:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:47.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:47 compute-0 sshd-session[133623]: Invalid user backup from 209.38.206.249 port 44288
Nov 24 09:34:47 compute-0 sshd-session[133623]: Connection closed by invalid user backup 209.38.206.249 port 44288 [preauth]
Nov 24 09:34:47 compute-0 podman[134728]: 2025-11-24 09:34:47.784156195 +0000 UTC m=+0.051264259 container create a31fe15fd622f33ff67e20198ad43d9a73a1057dd79f7f9af4660bfd9d4d7f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_neumann, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:34:47 compute-0 systemd[1]: Started libpod-conmon-a31fe15fd622f33ff67e20198ad43d9a73a1057dd79f7f9af4660bfd9d4d7f0a.scope.
Nov 24 09:34:47 compute-0 podman[134728]: 2025-11-24 09:34:47.759834298 +0000 UTC m=+0.026942412 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:34:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:34:47 compute-0 podman[134728]: 2025-11-24 09:34:47.879873493 +0000 UTC m=+0.146981567 container init a31fe15fd622f33ff67e20198ad43d9a73a1057dd79f7f9af4660bfd9d4d7f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_neumann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 09:34:47 compute-0 podman[134728]: 2025-11-24 09:34:47.889040967 +0000 UTC m=+0.156149031 container start a31fe15fd622f33ff67e20198ad43d9a73a1057dd79f7f9af4660bfd9d4d7f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:34:47 compute-0 podman[134728]: 2025-11-24 09:34:47.892949304 +0000 UTC m=+0.160057538 container attach a31fe15fd622f33ff67e20198ad43d9a73a1057dd79f7f9af4660bfd9d4d7f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_neumann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:34:47 compute-0 sharp_neumann[134791]: 167 167
Nov 24 09:34:47 compute-0 systemd[1]: libpod-a31fe15fd622f33ff67e20198ad43d9a73a1057dd79f7f9af4660bfd9d4d7f0a.scope: Deactivated successfully.
Nov 24 09:34:47 compute-0 podman[134728]: 2025-11-24 09:34:47.896297586 +0000 UTC m=+0.163405660 container died a31fe15fd622f33ff67e20198ad43d9a73a1057dd79f7f9af4660bfd9d4d7f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_neumann, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:34:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-24070bdb8297c53c6d1c69f4a3558ef16a2a2e56a7ded42efc946bddc6b76095-merged.mount: Deactivated successfully.
Nov 24 09:34:47 compute-0 podman[134728]: 2025-11-24 09:34:47.937702091 +0000 UTC m=+0.204810155 container remove a31fe15fd622f33ff67e20198ad43d9a73a1057dd79f7f9af4660bfd9d4d7f0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 09:34:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:47.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:47 compute-0 systemd[1]: libpod-conmon-a31fe15fd622f33ff67e20198ad43d9a73a1057dd79f7f9af4660bfd9d4d7f0a.scope: Deactivated successfully.
Nov 24 09:34:48 compute-0 podman[134814]: 2025-11-24 09:34:48.133514815 +0000 UTC m=+0.059574062 container create d6195da1b738ef72476123d9086ad260bb1045c85a92ba8cf1585bff8931767c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hopper, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 09:34:48 compute-0 python3.9[134790]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:34:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:48 compute-0 systemd[1]: Started libpod-conmon-d6195da1b738ef72476123d9086ad260bb1045c85a92ba8cf1585bff8931767c.scope.
Nov 24 09:34:48 compute-0 podman[134814]: 2025-11-24 09:34:48.107092727 +0000 UTC m=+0.033152094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.204579) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888204627, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1992, "num_deletes": 252, "total_data_size": 4018325, "memory_usage": 4093288, "flush_reason": "Manual Compaction"}
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 24 09:34:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b419d808e6b31d35394bed60acc53f5945f7a39836885663579de5d4d620123/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b419d808e6b31d35394bed60acc53f5945f7a39836885663579de5d4d620123/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b419d808e6b31d35394bed60acc53f5945f7a39836885663579de5d4d620123/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b419d808e6b31d35394bed60acc53f5945f7a39836885663579de5d4d620123/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888228473, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2326024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10857, "largest_seqno": 12848, "table_properties": {"data_size": 2319512, "index_size": 3391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16211, "raw_average_key_size": 20, "raw_value_size": 2305312, "raw_average_value_size": 2870, "num_data_blocks": 151, "num_entries": 803, "num_filter_entries": 803, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976683, "oldest_key_time": 1763976683, "file_creation_time": 1763976888, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 23989 microseconds, and 7496 cpu microseconds.
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.228554) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2326024 bytes OK
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.228594) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.233174) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.233211) EVENT_LOG_v1 {"time_micros": 1763976888233203, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.233243) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4010285, prev total WAL file size 4010285, number of live WAL files 2.
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.234264) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2271KB)], [26(13MB)]
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888234301, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16606046, "oldest_snapshot_seqno": -1}
Nov 24 09:34:48 compute-0 podman[134814]: 2025-11-24 09:34:48.240580642 +0000 UTC m=+0.166639919 container init d6195da1b738ef72476123d9086ad260bb1045c85a92ba8cf1585bff8931767c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:34:48 compute-0 podman[134814]: 2025-11-24 09:34:48.250227208 +0000 UTC m=+0.176286455 container start d6195da1b738ef72476123d9086ad260bb1045c85a92ba8cf1585bff8931767c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hopper, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:34:48 compute-0 podman[134814]: 2025-11-24 09:34:48.257889777 +0000 UTC m=+0.183949024 container attach d6195da1b738ef72476123d9086ad260bb1045c85a92ba8cf1585bff8931767c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hopper, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4471 keys, 14776044 bytes, temperature: kUnknown
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888352409, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14776044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14741780, "index_size": 21994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 112640, "raw_average_key_size": 25, "raw_value_size": 14655925, "raw_average_value_size": 3277, "num_data_blocks": 942, "num_entries": 4471, "num_filter_entries": 4471, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763976888, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.352676) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14776044 bytes
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.356989) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.5 rd, 125.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 13.6 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(13.5) write-amplify(6.4) OK, records in: 4896, records dropped: 425 output_compression: NoCompression
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.357007) EVENT_LOG_v1 {"time_micros": 1763976888356998, "job": 10, "event": "compaction_finished", "compaction_time_micros": 118180, "compaction_time_cpu_micros": 34934, "output_level": 6, "num_output_files": 1, "total_output_size": 14776044, "num_input_records": 4896, "num_output_records": 4471, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888357521, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976888359726, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.234206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.359754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.359758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.359759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.359761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:34:48 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:34:48.359763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:34:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:48 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:48 compute-0 ceph-mon[74331]: pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]: {
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:     "0": [
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:         {
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             "devices": [
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "/dev/loop3"
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             ],
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             "lv_name": "ceph_lv0",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             "lv_size": "21470642176",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             "name": "ceph_lv0",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             "tags": {
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.cluster_name": "ceph",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.crush_device_class": "",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.encrypted": "0",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.osd_id": "0",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.type": "block",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.vdo": "0",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:                 "ceph.with_tpm": "0"
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             },
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             "type": "block",
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:             "vg_name": "ceph_vg0"
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:         }
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]:     ]
Nov 24 09:34:48 compute-0 intelligent_hopper[134837]: }
Nov 24 09:34:48 compute-0 systemd[1]: libpod-d6195da1b738ef72476123d9086ad260bb1045c85a92ba8cf1585bff8931767c.scope: Deactivated successfully.
Nov 24 09:34:48 compute-0 podman[134814]: 2025-11-24 09:34:48.596446853 +0000 UTC m=+0.522506180 container died d6195da1b738ef72476123d9086ad260bb1045c85a92ba8cf1585bff8931767c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:34:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b419d808e6b31d35394bed60acc53f5945f7a39836885663579de5d4d620123-merged.mount: Deactivated successfully.
Nov 24 09:34:48 compute-0 podman[134814]: 2025-11-24 09:34:48.750351058 +0000 UTC m=+0.676410305 container remove d6195da1b738ef72476123d9086ad260bb1045c85a92ba8cf1585bff8931767c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hopper, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:34:48 compute-0 systemd[1]: libpod-conmon-d6195da1b738ef72476123d9086ad260bb1045c85a92ba8cf1585bff8931767c.scope: Deactivated successfully.
Nov 24 09:34:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:48 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:48 compute-0 sudo[134608]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:48 compute-0 sudo[134945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:34:48 compute-0 sudo[134945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:48 compute-0 sudo[134945]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:48 compute-0 sudo[134982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:34:48 compute-0 sudo[134982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:49 compute-0 sudo[135056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgjggshembculcfysipjwwfgesvzwnvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976888.7136066-62-58229788695551/AnsiballZ_setup.py'
Nov 24 09:34:49 compute-0 sudo[135056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:49 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:49 compute-0 python3.9[135058]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:34:49 compute-0 podman[135104]: 2025-11-24 09:34:49.371172248 +0000 UTC m=+0.045413345 container create f0fc4cdb317d113450720fbacab88c5bc65d96229d68f2a9937d50ecdb842637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_heyrovsky, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 09:34:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:34:49 compute-0 systemd[1]: Started libpod-conmon-f0fc4cdb317d113450720fbacab88c5bc65d96229d68f2a9937d50ecdb842637.scope.
Nov 24 09:34:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:34:49 compute-0 podman[135104]: 2025-11-24 09:34:49.349133957 +0000 UTC m=+0.023375084 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:34:49 compute-0 podman[135104]: 2025-11-24 09:34:49.450605557 +0000 UTC m=+0.124846684 container init f0fc4cdb317d113450720fbacab88c5bc65d96229d68f2a9937d50ecdb842637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 24 09:34:49 compute-0 podman[135104]: 2025-11-24 09:34:49.460288935 +0000 UTC m=+0.134530042 container start f0fc4cdb317d113450720fbacab88c5bc65d96229d68f2a9937d50ecdb842637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:34:49 compute-0 podman[135104]: 2025-11-24 09:34:49.466467706 +0000 UTC m=+0.140708833 container attach f0fc4cdb317d113450720fbacab88c5bc65d96229d68f2a9937d50ecdb842637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:34:49 compute-0 charming_heyrovsky[135121]: 167 167
Nov 24 09:34:49 compute-0 systemd[1]: libpod-f0fc4cdb317d113450720fbacab88c5bc65d96229d68f2a9937d50ecdb842637.scope: Deactivated successfully.
Nov 24 09:34:49 compute-0 podman[135104]: 2025-11-24 09:34:49.469286275 +0000 UTC m=+0.143527382 container died f0fc4cdb317d113450720fbacab88c5bc65d96229d68f2a9937d50ecdb842637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_heyrovsky, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:34:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:49.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdebc67ef1e04672c5a9039c8d2f0a576f789149967afcef8c9f7b2773d05246-merged.mount: Deactivated successfully.
Nov 24 09:34:49 compute-0 podman[135104]: 2025-11-24 09:34:49.514192687 +0000 UTC m=+0.188433794 container remove f0fc4cdb317d113450720fbacab88c5bc65d96229d68f2a9937d50ecdb842637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:34:49 compute-0 systemd[1]: libpod-conmon-f0fc4cdb317d113450720fbacab88c5bc65d96229d68f2a9937d50ecdb842637.scope: Deactivated successfully.
Nov 24 09:34:49 compute-0 sudo[135056]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:49 compute-0 podman[135148]: 2025-11-24 09:34:49.691411384 +0000 UTC m=+0.056500737 container create 208132835b62a21026c3862754b6a2735e48fa46951085b570a96da9cba1a828 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:34:49 compute-0 systemd[1]: Started libpod-conmon-208132835b62a21026c3862754b6a2735e48fa46951085b570a96da9cba1a828.scope.
Nov 24 09:34:49 compute-0 podman[135148]: 2025-11-24 09:34:49.670173054 +0000 UTC m=+0.035262437 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:34:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3236882a8eab7a73ae4956d471cd5de20209df0bc735d636037fd2487346dee8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3236882a8eab7a73ae4956d471cd5de20209df0bc735d636037fd2487346dee8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3236882a8eab7a73ae4956d471cd5de20209df0bc735d636037fd2487346dee8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3236882a8eab7a73ae4956d471cd5de20209df0bc735d636037fd2487346dee8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:34:49 compute-0 podman[135148]: 2025-11-24 09:34:49.789838359 +0000 UTC m=+0.154927732 container init 208132835b62a21026c3862754b6a2735e48fa46951085b570a96da9cba1a828 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_euclid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:34:49 compute-0 podman[135148]: 2025-11-24 09:34:49.797376813 +0000 UTC m=+0.162466166 container start 208132835b62a21026c3862754b6a2735e48fa46951085b570a96da9cba1a828 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 09:34:49 compute-0 podman[135148]: 2025-11-24 09:34:49.800810268 +0000 UTC m=+0.165899641 container attach 208132835b62a21026c3862754b6a2735e48fa46951085b570a96da9cba1a828 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_euclid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 24 09:34:49 compute-0 sudo[135242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dleetpgnxvgocjkdopgddkektrsjesyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976888.7136066-62-58229788695551/AnsiballZ_dnf.py'
Nov 24 09:34:49 compute-0 sudo[135242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:34:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:49.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:50 compute-0 python3.9[135244]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 09:34:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:50 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:50 compute-0 lvm[135317]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:34:50 compute-0 lvm[135317]: VG ceph_vg0 finished
Nov 24 09:34:50 compute-0 ceph-mon[74331]: pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:34:50 compute-0 zealous_euclid[135165]: {}
Nov 24 09:34:50 compute-0 systemd[1]: libpod-208132835b62a21026c3862754b6a2735e48fa46951085b570a96da9cba1a828.scope: Deactivated successfully.
Nov 24 09:34:50 compute-0 podman[135148]: 2025-11-24 09:34:50.559554532 +0000 UTC m=+0.924643885 container died 208132835b62a21026c3862754b6a2735e48fa46951085b570a96da9cba1a828 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_euclid, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 24 09:34:50 compute-0 systemd[1]: libpod-208132835b62a21026c3862754b6a2735e48fa46951085b570a96da9cba1a828.scope: Consumed 1.231s CPU time.
Nov 24 09:34:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3236882a8eab7a73ae4956d471cd5de20209df0bc735d636037fd2487346dee8-merged.mount: Deactivated successfully.
Nov 24 09:34:50 compute-0 podman[135148]: 2025-11-24 09:34:50.611845714 +0000 UTC m=+0.976935067 container remove 208132835b62a21026c3862754b6a2735e48fa46951085b570a96da9cba1a828 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:34:50 compute-0 systemd[1]: libpod-conmon-208132835b62a21026c3862754b6a2735e48fa46951085b570a96da9cba1a828.scope: Deactivated successfully.
Nov 24 09:34:50 compute-0 sudo[134982]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:34:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:34:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:34:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:34:50 compute-0 sudo[135332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:34:50 compute-0 sudo[135332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:34:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:50 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:50 compute-0 sudo[135332]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:34:50] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Nov 24 09:34:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:34:50] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Nov 24 09:34:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:51 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:34:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:51.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:34:51 compute-0 sudo[135242]: pam_unix(sudo:session): session closed for user root
Nov 24 09:34:51 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:34:51 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:34:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:51.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:52 compute-0 sshd-session[135434]: Invalid user steam from 209.38.206.249 port 43022
Nov 24 09:34:52 compute-0 python3.9[135510]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:34:52 compute-0 sshd-session[135434]: Connection closed by invalid user steam 209.38.206.249 port 43022 [preauth]
Nov 24 09:34:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:52 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:52 compute-0 ceph-mon[74331]: pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:34:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:52 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:53 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:34:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:53.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:53 compute-0 sshd-session[135590]: Invalid user nagios from 209.38.206.249 port 43028
Nov 24 09:34:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:53.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:53 compute-0 sshd-session[135590]: Connection closed by invalid user nagios 209.38.206.249 port 43028 [preauth]
Nov 24 09:34:54 compute-0 python3.9[135665]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 09:34:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:54 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:54 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:54 compute-0 python3.9[135816]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:34:55 compute-0 ceph-mon[74331]: pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:34:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:55 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:34:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:55.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:55 compute-0 python3.9[135967]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:34:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:55.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:56 compute-0 sshd-session[134518]: Connection closed by 192.168.122.30 port 54418
Nov 24 09:34:56 compute-0 sshd-session[134510]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:34:56 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Nov 24 09:34:56 compute-0 systemd[1]: session-47.scope: Consumed 6.299s CPU time.
Nov 24 09:34:56 compute-0 systemd-logind[822]: Session 47 logged out. Waiting for processes to exit.
Nov 24 09:34:56 compute-0 systemd-logind[822]: Removed session 47.
Nov 24 09:34:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:56 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0021b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:56 compute-0 sshd-session[135993]: Invalid user ansadmin from 209.38.206.249 port 43032
Nov 24 09:34:56 compute-0 sshd-session[135993]: Connection closed by invalid user ansadmin 209.38.206.249 port 43032 [preauth]
Nov 24 09:34:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:56 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:34:56.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:34:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:34:56.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:34:57 compute-0 ceph-mon[74331]: pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:34:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:57 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:34:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:34:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:57.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:34:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:57.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:34:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:58 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:58 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac002350 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:59 compute-0 ceph-mon[74331]: pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:34:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:34:59 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:34:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:34:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:34:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:34:59.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:34:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:34:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:34:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:34:59.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.161352) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900161451, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 364, "num_deletes": 251, "total_data_size": 296112, "memory_usage": 303232, "flush_reason": "Manual Compaction"}
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900233463, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 294369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12849, "largest_seqno": 13212, "table_properties": {"data_size": 292118, "index_size": 415, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5388, "raw_average_key_size": 17, "raw_value_size": 287616, "raw_average_value_size": 955, "num_data_blocks": 18, "num_entries": 301, "num_filter_entries": 301, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976888, "oldest_key_time": 1763976888, "file_creation_time": 1763976900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 72158 microseconds, and 2178 cpu microseconds.
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.233526) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 294369 bytes OK
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.233558) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.235838) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.235901) EVENT_LOG_v1 {"time_micros": 1763976900235889, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.235929) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 293721, prev total WAL file size 293721, number of live WAL files 2.
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.236386) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(287KB)], [29(14MB)]
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900236413, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15070413, "oldest_snapshot_seqno": -1}
Nov 24 09:35:00 compute-0 sudo[135999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:35:00 compute-0 sudo[135999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:00 compute-0 sudo[135999]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4259 keys, 12940658 bytes, temperature: kUnknown
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900354076, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 12940658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12909581, "index_size": 19319, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 109143, "raw_average_key_size": 25, "raw_value_size": 12829163, "raw_average_value_size": 3012, "num_data_blocks": 815, "num_entries": 4259, "num_filter_entries": 4259, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763976900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.354584) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12940658 bytes
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.393937) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.9 rd, 109.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.1 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(95.2) write-amplify(44.0) OK, records in: 4772, records dropped: 513 output_compression: NoCompression
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.393993) EVENT_LOG_v1 {"time_micros": 1763976900393972, "job": 12, "event": "compaction_finished", "compaction_time_micros": 117795, "compaction_time_cpu_micros": 27079, "output_level": 6, "num_output_files": 1, "total_output_size": 12940658, "num_input_records": 4772, "num_output_records": 4259, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900394348, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763976900396775, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.236341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.396808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.396812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.396813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.396815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:35:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:35:00.396816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:35:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:00 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:00 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:35:00] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:35:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:35:00] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:35:01 compute-0 ceph-mon[74331]: pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:35:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:01 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:35:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:01.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:01 compute-0 sshd-session[136025]: Accepted publickey for zuul from 192.168.122.30 port 60036 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:35:01 compute-0 systemd-logind[822]: New session 48 of user zuul.
Nov 24 09:35:01 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 24 09:35:01 compute-0 sshd-session[136025]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:35:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:35:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:01.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:35:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:02 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:02 compute-0 python3.9[136179]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:35:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:02 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:03 compute-0 ceph-mon[74331]: pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:35:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:03 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:03.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:03.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:04 compute-0 sudo[136334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpjpctudgnqeuiinwyvblojvtgpwrswx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976903.6811855-110-6937950523609/AnsiballZ_file.py'
Nov 24 09:35:04 compute-0 sudo[136334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:04 compute-0 ceph-mon[74331]: pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:04 compute-0 python3.9[136337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:04 compute-0 sudo[136334]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:04 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:04 compute-0 sudo[136487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxjanzjezyhqbpxyckoekterdyfqydxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976904.4956422-110-267875200186379/AnsiballZ_file.py'
Nov 24 09:35:04 compute-0 sudo[136487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:04 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:04 compute-0 python3.9[136489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:04 compute-0 sudo[136487]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:05 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:05.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:05 compute-0 sudo[136640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxigvboheizlbvyhxohthxejpnpmpxvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976905.1791658-157-70208591071432/AnsiballZ_stat.py'
Nov 24 09:35:05 compute-0 sudo[136640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:05 compute-0 python3.9[136642]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:05 compute-0 sudo[136640]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:05.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:06 compute-0 sudo[136764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqsmlmcwiiqcthitsdbbjilitejjdkaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976905.1791658-157-70208591071432/AnsiballZ_copy.py'
Nov 24 09:35:06 compute-0 sudo[136764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:06 compute-0 python3.9[136766]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976905.1791658-157-70208591071432/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d6735be6ea702e7da5db04f536183e82de4adbd3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:06 compute-0 ceph-mon[74331]: pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:06 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:06 compute-0 sudo[136764]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:06 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:06 compute-0 sudo[136916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dizsrtciqfujmmlycvmuvzjxnjogtjop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976906.6527183-157-127370586986177/AnsiballZ_stat.py'
Nov 24 09:35:06 compute-0 sudo[136916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:35:06.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:35:07 compute-0 python3.9[136918]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:07 compute-0 sudo[136916]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:07 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:35:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:35:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:07.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:35:07 compute-0 sudo[137040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiufdmemelrkvuxjqetibvrrezsqrlen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976906.6527183-157-127370586986177/AnsiballZ_copy.py'
Nov 24 09:35:07 compute-0 sudo[137040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:35:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2769 writes, 13K keys, 2769 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2769 writes, 2769 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2769 writes, 13K keys, 2769 commit groups, 1.0 writes per commit group, ingest: 24.43 MB, 0.04 MB/s
                                           Interval WAL: 2769 writes, 2769 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     96.6      0.22              0.07         6    0.036       0      0       0.0       0.0
                                             L6      1/0   12.34 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.0    146.9    128.9      0.48              0.16         5    0.096     21K   2287       0.0       0.0
                                            Sum      1/0   12.34 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.0    101.2    118.8      0.70              0.23        11    0.064     21K   2287       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.0    101.8    119.5      0.70              0.23        10    0.070     21K   2287       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    146.9    128.9      0.48              0.16         5    0.096     21K   2287       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     98.1      0.21              0.07         5    0.043       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.021, interval 0.020
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b87797d350#2 capacity: 304.00 MB usage: 2.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000135 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(160,2.00 MB,0.659305%) FilterBlock(12,70.17 KB,0.0225418%) IndexBlock(12,139.92 KB,0.0449482%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 09:35:07 compute-0 python3.9[137042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976906.6527183-157-127370586986177/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3228e523f8b01d6a11882d8cc1d2d959030dab43 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:07 compute-0 sudo[137040]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:07.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093507 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:35:08 compute-0 sudo[137193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynbwlvyuaokwswkhchvtapwcmktrbdju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976907.9126518-157-236183014113206/AnsiballZ_stat.py'
Nov 24 09:35:08 compute-0 sudo[137193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:08 compute-0 python3.9[137195]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:08 compute-0 sudo[137193]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:08 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:08 compute-0 ceph-mon[74331]: pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:35:08 compute-0 sudo[137316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdgxckljcrlschzqatdapsnflddfcmjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976907.9126518-157-236183014113206/AnsiballZ_copy.py'
Nov 24 09:35:08 compute-0 sudo[137316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:08 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:08 compute-0 python3.9[137318]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976907.9126518-157-236183014113206/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8c351cd8a6ebceb679460a28c301615bf645e52b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:08 compute-0 sudo[137316]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:09 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:09 compute-0 sudo[137469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlqybzsiieshdkrtiplkftmmnpmrfmln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976909.1660461-290-264005471288782/AnsiballZ_file.py'
Nov 24 09:35:09 compute-0 sudo[137469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:09.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:09 compute-0 python3.9[137471]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:09 compute-0 sudo[137469]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:09.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:10 compute-0 sudo[137622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmpfofrtmlotkwotzfdnnzazztblmmfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976909.8570676-290-156407458765715/AnsiballZ_file.py'
Nov 24 09:35:10 compute-0 sudo[137622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:10 compute-0 python3.9[137624]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:10 compute-0 sudo[137622]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:10 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:10 compute-0 ceph-mon[74331]: pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:10 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:10 compute-0 sudo[137774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghopxmsyqqvygzxyfowlubmmrrailndi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976910.568691-336-87343166492998/AnsiballZ_stat.py'
Nov 24 09:35:10 compute-0 sudo[137774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:35:10] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:35:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:35:10] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:35:11 compute-0 python3.9[137776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:11 compute-0 sudo[137774]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:11 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:35:11 compute-0 sudo[137898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbrgdydjssfvoapygzsfemgevkrwlhjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976910.568691-336-87343166492998/AnsiballZ_copy.py'
Nov 24 09:35:11 compute-0 sudo[137898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:11.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:11 compute-0 python3.9[137900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976910.568691-336-87343166492998/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=60c7a68ece9e6105a902ac858fb03c267daf78f1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:11 compute-0 sudo[137898]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:11.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:12 compute-0 sudo[138050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blbtyxmtsnudemtcethfgmyxikddoavk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976911.827799-336-246751294620667/AnsiballZ_stat.py'
Nov 24 09:35:12 compute-0 sudo[138050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:12 compute-0 python3.9[138053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:12 compute-0 sudo[138050]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:12 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:12 compute-0 ceph-mon[74331]: pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:35:12 compute-0 sudo[138174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xleihvwdpecsclqfigdaogggparlaoep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976911.827799-336-246751294620667/AnsiballZ_copy.py'
Nov 24 09:35:12 compute-0 sudo[138174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:12 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:12 compute-0 python3.9[138176]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976911.827799-336-246751294620667/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a9a797b79c320330a0fbef3d6d785446f2b400de backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:12 compute-0 sudo[138174]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:13 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:13 compute-0 sudo[138327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-advflzzbegeiuruhecjzgivymjvxwzdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976913.0834768-336-19310247242047/AnsiballZ_stat.py'
Nov 24 09:35:13 compute-0 sudo[138327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:35:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:13.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:13 compute-0 python3.9[138329]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:13 compute-0 sudo[138327]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:13 compute-0 sudo[138450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktuaixfzzrysuhhkqdqbktiffzsqmmka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976913.0834768-336-19310247242047/AnsiballZ_copy.py'
Nov 24 09:35:13 compute-0 sudo[138450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:13.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:14 compute-0 python3.9[138452]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976913.0834768-336-19310247242047/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=bd13c428ac1a0938fd18470742c34906980e30e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:14 compute-0 sudo[138450]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:14 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:14 compute-0 ceph-mon[74331]: pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:35:14 compute-0 sudo[138604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqfedxokkrcdrjbfxgbwqqcipbcwumjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976914.3458312-470-43128248590128/AnsiballZ_file.py'
Nov 24 09:35:14 compute-0 sudo[138604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:14 compute-0 python3.9[138606]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:14 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:14 compute-0 sudo[138604]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:15 compute-0 sudo[138757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpqrdmglmzyuugqqpebgiklqwduclanj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976914.9816456-470-203736100569391/AnsiballZ_file.py'
Nov 24 09:35:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:15 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:15 compute-0 sudo[138757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:35:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:35:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:35:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:35:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:35:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:35:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:35:15 compute-0 python3.9[138759]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:15 compute-0 sudo[138757]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:15.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:35:15 compute-0 sudo[138909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzhzgackdfetbiecficmogkdpqxtzheu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976915.68301-521-250085425625489/AnsiballZ_stat.py'
Nov 24 09:35:15 compute-0 sudo[138909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:15.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:16 compute-0 python3.9[138911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:16 compute-0 sudo[138909]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:16 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:16 compute-0 sudo[139033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwrftokzmdchqyrgqylpglykywyrgnth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976915.68301-521-250085425625489/AnsiballZ_copy.py'
Nov 24 09:35:16 compute-0 sudo[139033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:16 compute-0 ceph-mon[74331]: pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:35:16 compute-0 python3.9[139035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976915.68301-521-250085425625489/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=a041df4e50b3a7485c77038688d8de6d0ecee5a7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:16 compute-0 sudo[139033]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:16 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:35:16.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:35:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:35:16.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:35:17 compute-0 sudo[139185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veapawvgukzsosrcwibadpihnsmsxarz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976916.8385005-521-74622927050566/AnsiballZ_stat.py'
Nov 24 09:35:17 compute-0 sudo[139185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:17 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:17 compute-0 python3.9[139187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:17 compute-0 sudo[139185]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:17 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:35:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:17.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:17 compute-0 sudo[139309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxzruihsnxyzuapxqjxqiwihpczjelff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976916.8385005-521-74622927050566/AnsiballZ_copy.py'
Nov 24 09:35:17 compute-0 sudo[139309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:17 compute-0 python3.9[139311]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976916.8385005-521-74622927050566/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a9a797b79c320330a0fbef3d6d785446f2b400de backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:17 compute-0 sudo[139309]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:17.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:18 compute-0 sudo[139462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwvxbeeuocwqinicyfkvoscpbstyiewc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976918.0160134-521-55626701389394/AnsiballZ_stat.py'
Nov 24 09:35:18 compute-0 sudo[139462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:18 compute-0 python3.9[139464]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:18 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee740016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:18 compute-0 sudo[139462]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:18 compute-0 ceph-mon[74331]: pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:18 compute-0 sudo[139586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pojeifnabufkifdmotrznsxftiwpjtat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976918.0160134-521-55626701389394/AnsiballZ_copy.py'
Nov 24 09:35:18 compute-0 sudo[139586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:18 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:18 compute-0 python3.9[139588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976918.0160134-521-55626701389394/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8a36e88b3ad0dff1cd1cac6e735cb4576cfa91df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:19 compute-0 sudo[139586]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:19 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:19.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:19.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:20 compute-0 sudo[139739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quawbzlneggybdkwtjdzvduuhowjddak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976919.7179341-682-5376281979808/AnsiballZ_file.py'
Nov 24 09:35:20 compute-0 sudo[139739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:20 compute-0 python3.9[139741]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:20 compute-0 sudo[139739]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:20 compute-0 sudo[139788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:35:20 compute-0 sudo[139788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:20 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:35:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:20 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:35:20 compute-0 sudo[139788]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:20 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:20 compute-0 ceph-mon[74331]: pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:20 compute-0 sudo[139917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgpiidogevszpnerpeaovnyebotnojuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976920.367611-714-125134039806193/AnsiballZ_stat.py'
Nov 24 09:35:20 compute-0 sudo[139917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:20 compute-0 python3.9[139919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:20 compute-0 sudo[139917]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:20 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee740016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:35:20] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:35:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:35:20] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Nov 24 09:35:21 compute-0 sudo[140040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymuurkfzbuvqixsomnfqkflqnzzeqzmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976920.367611-714-125134039806193/AnsiballZ_copy.py'
Nov 24 09:35:21 compute-0 sudo[140040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:21 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:21 compute-0 python3.9[140042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976920.367611-714-125134039806193/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:21 compute-0 sudo[140040]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:35:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:21.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:21 compute-0 sudo[140193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oslaiswsflaeudimcedujvdlfmjddzmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976921.5335922-757-90956338549559/AnsiballZ_file.py'
Nov 24 09:35:21 compute-0 sudo[140193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:21.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:22 compute-0 python3.9[140195]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:22 compute-0 sudo[140193]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:22 compute-0 sudo[140346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfuzzzqxfztjfwbqgfntuwzzwzwgwjck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976922.1925876-784-113256420903495/AnsiballZ_stat.py'
Nov 24 09:35:22 compute-0 sudo[140346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:22 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88001850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:22 compute-0 ceph-mon[74331]: pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:35:22 compute-0 python3.9[140348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:22 compute-0 sudo[140346]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:22 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:23 compute-0 sudo[140469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgczwiyexunsktalrpqboqwuzyvmrvtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976922.1925876-784-113256420903495/AnsiballZ_copy.py'
Nov 24 09:35:23 compute-0 sudo[140469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:23 compute-0 python3.9[140471]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976922.1925876-784-113256420903495/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:23 compute-0 sudo[140469]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:23 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee740016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:35:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:23 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:35:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:23.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:23 compute-0 sudo[140622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoddrlxpzpwlmhqrjxiwiafquqbpdxon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976923.4348505-831-244727540506572/AnsiballZ_file.py'
Nov 24 09:35:23 compute-0 sudo[140622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:23 compute-0 python3.9[140624]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:23 compute-0 sudo[140622]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:24.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:24 compute-0 sudo[140775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmnvvsobmtfokkzuiyempiuwdtkhkhjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976924.0553555-853-2425519244290/AnsiballZ_stat.py'
Nov 24 09:35:24 compute-0 sudo[140775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:24 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:24 compute-0 python3.9[140777]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:24 compute-0 sudo[140775]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:24 compute-0 ceph-mon[74331]: pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:35:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:24 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88001850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:24 compute-0 sudo[140898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hplncxaivcodyfumtbqwpnwotfwyeelh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976924.0553555-853-2425519244290/AnsiballZ_copy.py'
Nov 24 09:35:24 compute-0 sudo[140898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:25 compute-0 python3.9[140900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976924.0553555-853-2425519244290/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:25 compute-0 sudo[140898]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:25 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:35:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:25.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:25 compute-0 sudo[141051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwdguprkzktdmdrvdbysbbdbykwkwwot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976925.281499-900-103715435027982/AnsiballZ_file.py'
Nov 24 09:35:25 compute-0 sudo[141051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:25 compute-0 python3.9[141053]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:25 compute-0 sudo[141051]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:26.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:26 compute-0 sudo[141204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyjbyriuoasqnjlnnaeucgiwjlpfqbhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976925.9446344-923-118455071798561/AnsiballZ_stat.py'
Nov 24 09:35:26 compute-0 sudo[141204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:26 compute-0 python3.9[141206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:26 compute-0 sudo[141204]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:26 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:26 compute-0 ceph-mon[74331]: pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:35:26 compute-0 sudo[141327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svdacjgavrvhpjehnnfuubpaefmmdltg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976925.9446344-923-118455071798561/AnsiballZ_copy.py'
Nov 24 09:35:26 compute-0 sudo[141327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:26 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:26 compute-0 python3.9[141329]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976925.9446344-923-118455071798561/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:26 compute-0 sudo[141327]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:35:26.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:35:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:27 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88001850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:35:27 compute-0 sudo[141480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtkocamdbsygpiccnuipyyhffwsriabm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976927.1433988-966-132298934567732/AnsiballZ_file.py'
Nov 24 09:35:27 compute-0 sudo[141480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:27.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:27 compute-0 python3.9[141482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:27 compute-0 sudo[141480]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:28.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:28 compute-0 sudo[141632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wghdcsikbwxrzupeopoqxzijfwyrqkwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976927.7993984-993-38786980503612/AnsiballZ_stat.py'
Nov 24 09:35:28 compute-0 sudo[141632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:28 compute-0 python3.9[141634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:28 compute-0 sudo[141632]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:28 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:28 compute-0 sudo[141756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zztspsedijgxpaypehvmnhhekxsrlsdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976927.7993984-993-38786980503612/AnsiballZ_copy.py'
Nov 24 09:35:28 compute-0 sudo[141756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:28 compute-0 ceph-mon[74331]: pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:35:28 compute-0 python3.9[141758]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976927.7993984-993-38786980503612/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:28 compute-0 sudo[141756]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:28 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:29 compute-0 sudo[141909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oachbqhvblrfbngkaqhsbpywftgqmnqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976929.0013676-1040-32564606033957/AnsiballZ_file.py'
Nov 24 09:35:29 compute-0 sudo[141909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:29 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:35:29 compute-0 python3.9[141911]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:29 compute-0 sudo[141909]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:29.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:29 compute-0 sudo[142061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtxdpbiltgrrgeqhppbdfhekakyuccvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976929.5946257-1059-215728365282485/AnsiballZ_stat.py'
Nov 24 09:35:29 compute-0 sudo[142061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:30.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093530 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:35:30 compute-0 python3.9[142063]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:30 compute-0 sudo[142061]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:30 compute-0 sudo[142185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfyaprdcswaxqcnqqkibkzwvlmpcgcyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976929.5946257-1059-215728365282485/AnsiballZ_copy.py'
Nov 24 09:35:30 compute-0 sudo[142185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88002950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:30 compute-0 python3.9[142187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976929.5946257-1059-215728365282485/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=544ccad07cd49583316075cf420b5b550bb4de77 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:30 compute-0 sudo[142185]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:30 compute-0 ceph-mon[74331]: pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:35:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:35:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:35:30] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Nov 24 09:35:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:35:30] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Nov 24 09:35:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:31 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:35:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:31.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:32.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:32 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:32 compute-0 ceph-mon[74331]: pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:35:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:32 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88002950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:33 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:33.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:33 compute-0 sshd-session[136028]: Connection closed by 192.168.122.30 port 60036
Nov 24 09:35:33 compute-0 sshd-session[136025]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:35:33 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 24 09:35:33 compute-0 systemd[1]: session-48.scope: Consumed 23.277s CPU time.
Nov 24 09:35:33 compute-0 systemd-logind[822]: Session 48 logged out. Waiting for processes to exit.
Nov 24 09:35:33 compute-0 systemd-logind[822]: Removed session 48.
Nov 24 09:35:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:34.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:34 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:34 compute-0 ceph-mon[74331]: pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:34 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:35 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88002950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:35 compute-0 sshd-session[142219]: Connection closed by 165.232.48.44 port 36276
Nov 24 09:35:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:35.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:36.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:36 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:36 compute-0 ceph-mon[74331]: pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:36 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:35:36.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:35:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:37 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:37.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:37 compute-0 sshd-session[142216]: Invalid user debian from 209.38.206.249 port 35648
Nov 24 09:35:37 compute-0 sshd-session[142216]: Connection closed by invalid user debian 209.38.206.249 port 35648 [preauth]
Nov 24 09:35:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:38.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:38 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:38 compute-0 ceph-mon[74331]: pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:35:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:38 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:38 compute-0 sshd-session[142223]: Accepted publickey for zuul from 192.168.122.30 port 42466 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:35:38 compute-0 systemd-logind[822]: New session 49 of user zuul.
Nov 24 09:35:38 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 24 09:35:38 compute-0 sshd-session[142223]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:35:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:39 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:35:39 compute-0 sudo[142377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pojcbbezcnlzlavhbfdzspbdrjavgrtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976939.0016062-26-140376635653907/AnsiballZ_file.py'
Nov 24 09:35:39 compute-0 sudo[142377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:39.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:39 compute-0 python3.9[142379]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:39 compute-0 sudo[142377]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:40.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:40 compute-0 sudo[142530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guxvgsihbutnrwygyasohfsvxxuyxxqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976939.9608686-62-77416966207719/AnsiballZ_stat.py'
Nov 24 09:35:40 compute-0 sudo[142530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:40 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:40 compute-0 sudo[142533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:35:40 compute-0 sudo[142533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:40 compute-0 sudo[142533]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:40 compute-0 python3.9[142532]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:40 compute-0 sudo[142530]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:40 compute-0 ceph-mon[74331]: pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:35:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:40 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003de0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:35:40] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Nov 24 09:35:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:35:40] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Nov 24 09:35:41 compute-0 sudo[142680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azqsgmghbxqfrbvwgrcxioawddsicair ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976939.9608686-62-77416966207719/AnsiballZ_copy.py'
Nov 24 09:35:41 compute-0 sudo[142680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:41 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003de0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:41 compute-0 python3.9[142682]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976939.9608686-62-77416966207719/.source.conf _original_basename=ceph.conf follow=False checksum=35be1475912cb94f172c67eb64af3d903820f5fe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:41 compute-0 sudo[142680]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:41 compute-0 sshd-session[142640]: Invalid user ecs-user from 209.38.206.249 port 44216
Nov 24 09:35:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:35:41 compute-0 sshd-session[142640]: Connection closed by invalid user ecs-user 209.38.206.249 port 44216 [preauth]
Nov 24 09:35:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:41.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:41 compute-0 sudo[142835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtbwfvfeopxyqqynktvavcfrxgzlwiah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976941.536611-62-198228078924300/AnsiballZ_stat.py'
Nov 24 09:35:41 compute-0 sudo[142835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:41 compute-0 sshd-session[142748]: Invalid user nexus from 209.38.206.249 port 44218
Nov 24 09:35:42 compute-0 python3.9[142837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:35:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:42.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:42 compute-0 sshd-session[142748]: Connection closed by invalid user nexus 209.38.206.249 port 44218 [preauth]
Nov 24 09:35:42 compute-0 sudo[142835]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:42 compute-0 sudo[142959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhdajyoyyvfzqgmjpqmditfcwzkydqiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976941.536611-62-198228078924300/AnsiballZ_copy.py'
Nov 24 09:35:42 compute-0 sudo[142959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003de0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:42 compute-0 python3.9[142961]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763976941.536611-62-198228078924300/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=5b68b38eb199b40419da711d3119a1cd74c89fee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:35:42 compute-0 sudo[142959]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:42 compute-0 ceph-mon[74331]: pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:35:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:43 compute-0 sshd-session[142226]: Connection closed by 192.168.122.30 port 42466
Nov 24 09:35:43 compute-0 sshd-session[142223]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:35:43 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 24 09:35:43 compute-0 systemd[1]: session-49.scope: Consumed 2.937s CPU time.
Nov 24 09:35:43 compute-0 systemd-logind[822]: Session 49 logged out. Waiting for processes to exit.
Nov 24 09:35:43 compute-0 systemd-logind[822]: Removed session 49.
Nov 24 09:35:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
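[note] The _set_new_cache_sizes line splits a ~973 MiB cache target into incremental-map, full-map, and key-value allocations. The kv share here works out to cache_size x 0.3125 rounded up to a 4 MiB boundary (318767104 = 76 x 4 MiB); the ratio and rounding are inferred from the logged numbers, not quoted from the Ceph source, so treat this check as an assumption.

    CHUNK = 4 * 1024 * 1024          # allocations land on 4 MiB boundaries
    cache_size = 1020054731          # from the mon log line above

    def round_up(n, step=CHUNK):
        return ((n + step - 1) // step) * step

    # Assumed split: the key-value cache gets a 0.3125 share of the target.
    kv_alloc = round_up(int(cache_size * 0.3125))
    print(kv_alloc)                  # 318767104, matching kv_alloc in the log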
Nov 24 09:35:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:43 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003de0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:43.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:44.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
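[note] The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 are health checks against radosgw, and the beast access line carries the request latency. A small sketch pulling fields out of such lines; the regex is an assumption fitted to the format seen here.

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) .* latency=(?P<lat>[\d.]+)s'
    )

    line = ('beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous '
            '[24/Nov/2025:09:35:43.541 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000026s')

    m = BEAST.search(line)
    if m:
        print(m["ip"], m["req"], m["status"], float(m["lat"]))
        # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.001000026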
Nov 24 09:35:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:44 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003de0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:44 compute-0 ceph-mon[74331]: pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:44 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
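[note] ceph-crash failing with EACCES on /var/lib/ceph/crash usually means the host-side crash directory is not owned by the UID the containerized agent runs as (167 for ceph in these images). A quick inspection sketch, assuming host access to the path; it only reports ownership and readability, it does not change anything.

    import os, stat

    CRASH_DIR = "/var/lib/ceph/crash"   # path from the log line above
    st = os.stat(CRASH_DIR)
    print("owner uid/gid:", st.st_uid, st.st_gid)
    print("mode:", stat.filemode(st.st_mode))
    print("readable by this process:", os.access(CRASH_DIR, os.R_OK | os.X_OK))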
Nov 24 09:35:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:45 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:35:45
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['.nfs', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'images', '.rgw.root', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr']
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
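[note] The pg_autoscaler targets above are reproducible from the logged inputs as pg_target = capacity_ratio x bias x 300, where 300 is consistent with the default of 100 target PGs per OSD across this cluster's 3 OSDs; that constant is inferred from the numbers, not quoted from the autoscaler source. For cephfs.cephfs.meta: 5.087e-07 x 4.0 x 300 ~ 0.00061047, matching the line above before quantization to a power of two.

    # Reconstructed from the logged values; 300 = 100 target PGs/OSD x 3 OSDs is an assumption.
    TARGET_PGS = 100 * 3

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }

    for name, (ratio, bias) in pools.items():
        pg_target = ratio * bias * TARGET_PGS
        print(f"{name:20s} pg_target={pg_target:.16g}")
    # .mgr               -> 0.002155724995116234  (quantized to 1 in the log)
    # cephfs.cephfs.meta -> 0.0006104707950771635 (quantized to 16)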
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:35:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:35:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:45.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:35:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:46.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:46 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003de0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:46 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:46 compute-0 ceph-mon[74331]: pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:35:46.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
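[note] The dispatcher error shows Alertmanager failing to POST alerts to the Ceph dashboard's /api/prometheus_receiver on compute-1 and compute-2 (i/o timeout, context deadline exceeded). To isolate whether the network path or the receiver is at fault, a throwaway HTTP sink can stand in on port 8443; this sketch is plain HTTP, while the real dashboard endpoint may be TLS, so treat it purely as a connectivity probe.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Sink(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            print(self.path, body[:200])   # peek at the alert payload
            self.send_response(200)
            self.end_headers()

    # Port taken from the failing URL in the log; plain HTTP only (assumption).
    HTTPServer(("0.0.0.0", 8443), Sink).serve_forever()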
Nov 24 09:35:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:47 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:35:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:47.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:48.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:48 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:48 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:48 compute-0 ceph-mon[74331]: pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:35:48 compute-0 sshd-session[142993]: Accepted publickey for zuul from 192.168.122.30 port 33784 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:35:48 compute-0 systemd-logind[822]: New session 50 of user zuul.
Nov 24 09:35:48 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 24 09:35:48 compute-0 sshd-session[142993]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:35:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:49 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:49.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:49 compute-0 python3.9[143147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:35:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:50.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:50 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:50 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:50 compute-0 ceph-mon[74331]: pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:35:50] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
Nov 24 09:35:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:35:50] "GET /metrics HTTP/1.1" 200 48250 "" "Prometheus/2.51.0"
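[note] The mgr's prometheus module just served /metrics (48250 bytes) to Prometheus 2.51.0. A quick manual pull of the same endpoint, filtering one metric family; the port 9283 default and the ceph_health_status family name are assumptions about this release's mgr/prometheus exporter.

    from urllib.request import urlopen

    URL = "http://192.168.122.100:9283/metrics"   # default mgr/prometheus port (assumed)

    with urlopen(URL, timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("ceph_health_status"):
                print(line)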
Nov 24 09:35:51 compute-0 sudo[143277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:35:51 compute-0 sudo[143277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:51 compute-0 sudo[143277]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:51 compute-0 sudo[143327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opduyhnrhwtzovvqqsszdfsdvvfknsai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976950.591601-62-190785003195674/AnsiballZ_file.py'
Nov 24 09:35:51 compute-0 sudo[143327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:51 compute-0 sudo[143330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 24 09:35:51 compute-0 sudo[143330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:51 compute-0 python3.9[143332]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:51 compute-0 sudo[143327]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:51 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:35:51 compute-0 sudo[143330]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:35:51 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:35:51 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:51.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:51 compute-0 sudo[143455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:35:51 compute-0 sudo[143455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:51 compute-0 sudo[143455]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:51 compute-0 sudo[143503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:35:51 compute-0 sudo[143503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:51 compute-0 sudo[143578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khffjblvkjasucmetjcexmmpomleexee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976951.4521003-62-70132466607790/AnsiballZ_file.py'
Nov 24 09:35:51 compute-0 sudo[143578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:51 compute-0 python3.9[143580]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:35:51 compute-0 sudo[143578]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:52.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:52 compute-0 sudo[143503]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:35:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:35:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:52 compute-0 sudo[143638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:35:52 compute-0 sudo[143638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:52 compute-0 sudo[143638]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:52 compute-0 sudo[143680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:35:52 compute-0 sudo[143680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:52 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:52 compute-0 ceph-mon[74331]: pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:35:52 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:52 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:35:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:35:52 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:52 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:35:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:35:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:35:52 compute-0 podman[143854]: 2025-11-24 09:35:52.755898845 +0000 UTC m=+0.047021410 container create ca7fb742b26fe0522de8bd2ee516f96285a52fb44277e2ccfe1ccf39dc41e82f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mayer, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:35:52 compute-0 systemd[1]: Started libpod-conmon-ca7fb742b26fe0522de8bd2ee516f96285a52fb44277e2ccfe1ccf39dc41e82f.scope.
Nov 24 09:35:52 compute-0 podman[143854]: 2025-11-24 09:35:52.73179528 +0000 UTC m=+0.022917865 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:35:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:35:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:52 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:52 compute-0 podman[143854]: 2025-11-24 09:35:52.854534709 +0000 UTC m=+0.145657294 container init ca7fb742b26fe0522de8bd2ee516f96285a52fb44277e2ccfe1ccf39dc41e82f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:35:52 compute-0 podman[143854]: 2025-11-24 09:35:52.863271263 +0000 UTC m=+0.154393828 container start ca7fb742b26fe0522de8bd2ee516f96285a52fb44277e2ccfe1ccf39dc41e82f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mayer, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:35:52 compute-0 podman[143854]: 2025-11-24 09:35:52.866813587 +0000 UTC m=+0.157936172 container attach ca7fb742b26fe0522de8bd2ee516f96285a52fb44277e2ccfe1ccf39dc41e82f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mayer, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 09:35:52 compute-0 naughty_mayer[143868]: 167 167
Nov 24 09:35:52 compute-0 systemd[1]: libpod-ca7fb742b26fe0522de8bd2ee516f96285a52fb44277e2ccfe1ccf39dc41e82f.scope: Deactivated successfully.
Nov 24 09:35:52 compute-0 podman[143854]: 2025-11-24 09:35:52.870552318 +0000 UTC m=+0.161674883 container died ca7fb742b26fe0522de8bd2ee516f96285a52fb44277e2ccfe1ccf39dc41e82f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-088d4f008d3489e966882baa72ce83fb8e0f86f7570de0bfa27166338cd10576-merged.mount: Deactivated successfully.
Nov 24 09:35:52 compute-0 podman[143854]: 2025-11-24 09:35:52.918422951 +0000 UTC m=+0.209545516 container remove ca7fb742b26fe0522de8bd2ee516f96285a52fb44277e2ccfe1ccf39dc41e82f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:35:52 compute-0 systemd[1]: libpod-conmon-ca7fb742b26fe0522de8bd2ee516f96285a52fb44277e2ccfe1ccf39dc41e82f.scope: Deactivated successfully.
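[note] The create/init/start/attach/died/remove sequence above is cephadm running a one-shot helper inside the ceph image; the "167 167" line is the container's ceph uid/gid pair printed to stdout. The same short-lived pattern from Python, using podman run --rm so the remove step is implicit; the image digest is taken from the log, the wrapper and the stat command are illustrative.

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # One-shot container: --rm tears it down once the entrypoint exits,
    # collapsing the created/started/died/removed lifecycle seen in the log.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())   # expected "167 167", the ceph uid/gid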
Nov 24 09:35:53 compute-0 python3.9[143853]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:35:53 compute-0 podman[143896]: 2025-11-24 09:35:53.091413407 +0000 UTC m=+0.045198472 container create 329b8b8162ac8e54c11a95e719866aadd448182e39bbb8c947f4f67ad53eeea9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_leakey, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 09:35:53 compute-0 systemd[1]: Started libpod-conmon-329b8b8162ac8e54c11a95e719866aadd448182e39bbb8c947f4f67ad53eeea9.scope.
Nov 24 09:35:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448d5e46ecf8117341d09367b1d32ac88bc7f510d151ecfc7a98e34154a5d325/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448d5e46ecf8117341d09367b1d32ac88bc7f510d151ecfc7a98e34154a5d325/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448d5e46ecf8117341d09367b1d32ac88bc7f510d151ecfc7a98e34154a5d325/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448d5e46ecf8117341d09367b1d32ac88bc7f510d151ecfc7a98e34154a5d325/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/448d5e46ecf8117341d09367b1d32ac88bc7f510d151ecfc7a98e34154a5d325/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:53 compute-0 podman[143896]: 2025-11-24 09:35:53.073667791 +0000 UTC m=+0.027452876 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:35:53 compute-0 podman[143896]: 2025-11-24 09:35:53.173745613 +0000 UTC m=+0.127530678 container init 329b8b8162ac8e54c11a95e719866aadd448182e39bbb8c947f4f67ad53eeea9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_leakey, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:35:53 compute-0 podman[143896]: 2025-11-24 09:35:53.181576613 +0000 UTC m=+0.135361678 container start 329b8b8162ac8e54c11a95e719866aadd448182e39bbb8c947f4f67ad53eeea9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:35:53 compute-0 podman[143896]: 2025-11-24 09:35:53.185535889 +0000 UTC m=+0.139320964 container attach 329b8b8162ac8e54c11a95e719866aadd448182e39bbb8c947f4f67ad53eeea9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_leakey, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:35:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:53 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:53 compute-0 objective_leakey[143937]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:35:53 compute-0 objective_leakey[143937]: --> All data devices are unavailable
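[note] "passed data devices: 0 physical, 1 LVM" followed by "All data devices are unavailable" is the idempotent outcome of lvm batch here: the one candidate, /dev/ceph_vg0/ceph_lv0, already carries OSD tags (see the lvm list report further below), so there is nothing new to prepare. A hedged check for that condition against such a report; the ceph.osd_id tag key comes from the listing shown below.

    import json

    def already_prepared(lvm_list_json, lv_path):
        """True if a `ceph-volume lvm list --format json` report shows OSD tags on lv_path."""
        report = json.loads(lvm_list_json)
        for osd_id, entries in report.items():
            for entry in entries:
                if entry.get("lv_path") == lv_path and "ceph.osd_id" in entry.get("tags", {}):
                    return True
        return False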
Nov 24 09:35:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:35:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:53.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:35:53 compute-0 systemd[1]: libpod-329b8b8162ac8e54c11a95e719866aadd448182e39bbb8c947f4f67ad53eeea9.scope: Deactivated successfully.
Nov 24 09:35:53 compute-0 podman[143896]: 2025-11-24 09:35:53.576821674 +0000 UTC m=+0.530606739 container died 329b8b8162ac8e54c11a95e719866aadd448182e39bbb8c947f4f67ad53eeea9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-448d5e46ecf8117341d09367b1d32ac88bc7f510d151ecfc7a98e34154a5d325-merged.mount: Deactivated successfully.
Nov 24 09:35:53 compute-0 podman[143896]: 2025-11-24 09:35:53.622458228 +0000 UTC m=+0.576243293 container remove 329b8b8162ac8e54c11a95e719866aadd448182e39bbb8c947f4f67ad53eeea9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_leakey, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:35:53 compute-0 systemd[1]: libpod-conmon-329b8b8162ac8e54c11a95e719866aadd448182e39bbb8c947f4f67ad53eeea9.scope: Deactivated successfully.
Nov 24 09:35:53 compute-0 sudo[143680]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:53 compute-0 sudo[144107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfqckdgemvsqrhyiisdtrslxbxfgbyrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976953.2699451-131-142430705352087/AnsiballZ_seboolean.py'
Nov 24 09:35:53 compute-0 sudo[144107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:53 compute-0 sudo[144073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:35:53 compute-0 sudo[144073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:53 compute-0 sudo[144073]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:53 compute-0 sudo[144117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:35:53 compute-0 sudo[144117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:53 compute-0 python3.9[144114]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 09:35:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:54.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:54 compute-0 podman[144183]: 2025-11-24 09:35:54.443079669 +0000 UTC m=+0.089734636 container create 6b565491ff8d58fe1d29bdd737d7db9c9ff00b65fa476c134eecc6f413561bd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lalande, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:35:54 compute-0 podman[144183]: 2025-11-24 09:35:54.381255912 +0000 UTC m=+0.027910900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:35:54 compute-0 systemd[1]: Started libpod-conmon-6b565491ff8d58fe1d29bdd737d7db9c9ff00b65fa476c134eecc6f413561bd0.scope.
Nov 24 09:35:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:54 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:35:54 compute-0 podman[144183]: 2025-11-24 09:35:54.548235687 +0000 UTC m=+0.194890674 container init 6b565491ff8d58fe1d29bdd737d7db9c9ff00b65fa476c134eecc6f413561bd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lalande, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 09:35:54 compute-0 ceph-mon[74331]: pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:54 compute-0 podman[144183]: 2025-11-24 09:35:54.554858695 +0000 UTC m=+0.201513662 container start 6b565491ff8d58fe1d29bdd737d7db9c9ff00b65fa476c134eecc6f413561bd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lalande, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:35:54 compute-0 podman[144183]: 2025-11-24 09:35:54.55914966 +0000 UTC m=+0.205804637 container attach 6b565491ff8d58fe1d29bdd737d7db9c9ff00b65fa476c134eecc6f413561bd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:35:54 compute-0 naughty_lalande[144199]: 167 167
Nov 24 09:35:54 compute-0 systemd[1]: libpod-6b565491ff8d58fe1d29bdd737d7db9c9ff00b65fa476c134eecc6f413561bd0.scope: Deactivated successfully.
Nov 24 09:35:54 compute-0 podman[144183]: 2025-11-24 09:35:54.56252117 +0000 UTC m=+0.209176167 container died 6b565491ff8d58fe1d29bdd737d7db9c9ff00b65fa476c134eecc6f413561bd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Nov 24 09:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-63bb68ce5a8b400729c7a17c1232709591e3610a46d7e9712d973e6f9029c74e-merged.mount: Deactivated successfully.
Nov 24 09:35:54 compute-0 podman[144183]: 2025-11-24 09:35:54.616688342 +0000 UTC m=+0.263343309 container remove 6b565491ff8d58fe1d29bdd737d7db9c9ff00b65fa476c134eecc6f413561bd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:35:54 compute-0 systemd[1]: libpod-conmon-6b565491ff8d58fe1d29bdd737d7db9c9ff00b65fa476c134eecc6f413561bd0.scope: Deactivated successfully.
Nov 24 09:35:54 compute-0 podman[144224]: 2025-11-24 09:35:54.814674428 +0000 UTC m=+0.051393519 container create a33143251fd76788284d37c82a89a25a5cfc57e5ce7ef47f3abdf0efc69ee11f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:35:54 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 24 09:35:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:54 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:54 compute-0 systemd[1]: Started libpod-conmon-a33143251fd76788284d37c82a89a25a5cfc57e5ce7ef47f3abdf0efc69ee11f.scope.
Nov 24 09:35:54 compute-0 podman[144224]: 2025-11-24 09:35:54.790662464 +0000 UTC m=+0.027381585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:35:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ec4b2dab17f8fbdb725c59d5e49619224c1c7cea05cd692a99dec6d86737fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ec4b2dab17f8fbdb725c59d5e49619224c1c7cea05cd692a99dec6d86737fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ec4b2dab17f8fbdb725c59d5e49619224c1c7cea05cd692a99dec6d86737fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ec4b2dab17f8fbdb725c59d5e49619224c1c7cea05cd692a99dec6d86737fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:55 compute-0 podman[144224]: 2025-11-24 09:35:55.040824418 +0000 UTC m=+0.277543609 container init a33143251fd76788284d37c82a89a25a5cfc57e5ce7ef47f3abdf0efc69ee11f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_shaw, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:35:55 compute-0 podman[144224]: 2025-11-24 09:35:55.052750488 +0000 UTC m=+0.289469599 container start a33143251fd76788284d37c82a89a25a5cfc57e5ce7ef47f3abdf0efc69ee11f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_shaw, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:35:55 compute-0 podman[144224]: 2025-11-24 09:35:55.076589627 +0000 UTC m=+0.313308768 container attach a33143251fd76788284d37c82a89a25a5cfc57e5ce7ef47f3abdf0efc69ee11f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_shaw, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 09:35:55 compute-0 sudo[144107]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:55 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:55 compute-0 silly_shaw[144244]: {
Nov 24 09:35:55 compute-0 silly_shaw[144244]:     "0": [
Nov 24 09:35:55 compute-0 silly_shaw[144244]:         {
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             "devices": [
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "/dev/loop3"
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             ],
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             "lv_name": "ceph_lv0",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             "lv_size": "21470642176",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             "name": "ceph_lv0",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             "tags": {
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.cluster_name": "ceph",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.crush_device_class": "",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.encrypted": "0",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.osd_id": "0",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.type": "block",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.vdo": "0",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:                 "ceph.with_tpm": "0"
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             },
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             "type": "block",
Nov 24 09:35:55 compute-0 silly_shaw[144244]:             "vg_name": "ceph_vg0"
Nov 24 09:35:55 compute-0 silly_shaw[144244]:         }
Nov 24 09:35:55 compute-0 silly_shaw[144244]:     ]
Nov 24 09:35:55 compute-0 silly_shaw[144244]: }
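
The JSON payload above is what the short-lived silly_shaw container printed before exiting; by its shape (top-level keys are OSD ids, each mapping to a list of LV records with an lv_tags string plus a parsed tags object) it matches `ceph-volume lvm list --format json` output. A minimal parsing sketch, assuming exactly the structure shown above; the capture file name is hypothetical:

    #!/usr/bin/env python3
    # Illustrative sketch: parse a ceph-volume "lvm list --format json"
    # payload shaped like the block logged above (top-level keys are OSD ids).
    import json

    with open("lvm-list.json") as fh:  # hypothetical capture of the stdout above
        report = json.load(fh)

    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")

Run against the payload above, this prints a single line for osd.0 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3.
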
Nov 24 09:35:55 compute-0 systemd[1]: libpod-a33143251fd76788284d37c82a89a25a5cfc57e5ce7ef47f3abdf0efc69ee11f.scope: Deactivated successfully.
Nov 24 09:35:55 compute-0 podman[144224]: 2025-11-24 09:35:55.393285584 +0000 UTC m=+0.630004775 container died a33143251fd76788284d37c82a89a25a5cfc57e5ce7ef47f3abdf0efc69ee11f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:35:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6ec4b2dab17f8fbdb725c59d5e49619224c1c7cea05cd692a99dec6d86737fc-merged.mount: Deactivated successfully.
Nov 24 09:35:55 compute-0 podman[144224]: 2025-11-24 09:35:55.498594386 +0000 UTC m=+0.735313477 container remove a33143251fd76788284d37c82a89a25a5cfc57e5ce7ef47f3abdf0efc69ee11f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_shaw, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:35:55 compute-0 systemd[1]: libpod-conmon-a33143251fd76788284d37c82a89a25a5cfc57e5ce7ef47f3abdf0efc69ee11f.scope: Deactivated successfully.
Nov 24 09:35:55 compute-0 sudo[144117]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:55.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:55 compute-0 sudo[144292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:35:55 compute-0 sudo[144292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:55 compute-0 sudo[144292]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:55 compute-0 sudo[144317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:35:55 compute-0 sudo[144317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:35:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:56.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:35:56 compute-0 podman[144383]: 2025-11-24 09:35:56.121441427 +0000 UTC m=+0.038310127 container create 0627da1350495872c6be8a5ec522c6ed2ca096dbef098652832b879e5808fc85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ganguly, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:35:56 compute-0 systemd[1]: Started libpod-conmon-0627da1350495872c6be8a5ec522c6ed2ca096dbef098652832b879e5808fc85.scope.
Nov 24 09:35:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:35:56 compute-0 podman[144383]: 2025-11-24 09:35:56.200265039 +0000 UTC m=+0.117133769 container init 0627da1350495872c6be8a5ec522c6ed2ca096dbef098652832b879e5808fc85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ganguly, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 09:35:56 compute-0 podman[144383]: 2025-11-24 09:35:56.105604343 +0000 UTC m=+0.022473073 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:35:56 compute-0 podman[144383]: 2025-11-24 09:35:56.21039643 +0000 UTC m=+0.127265140 container start 0627da1350495872c6be8a5ec522c6ed2ca096dbef098652832b879e5808fc85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:35:56 compute-0 podman[144383]: 2025-11-24 09:35:56.214609953 +0000 UTC m=+0.131478683 container attach 0627da1350495872c6be8a5ec522c6ed2ca096dbef098652832b879e5808fc85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 09:35:56 compute-0 amazing_ganguly[144400]: 167 167
Nov 24 09:35:56 compute-0 systemd[1]: libpod-0627da1350495872c6be8a5ec522c6ed2ca096dbef098652832b879e5808fc85.scope: Deactivated successfully.
Nov 24 09:35:56 compute-0 podman[144383]: 2025-11-24 09:35:56.218669892 +0000 UTC m=+0.135538612 container died 0627da1350495872c6be8a5ec522c6ed2ca096dbef098652832b879e5808fc85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bd87d1e65183fa73c0f519ef8672df4924d222c7db692793a05e7262a89572f-merged.mount: Deactivated successfully.
Nov 24 09:35:56 compute-0 podman[144383]: 2025-11-24 09:35:56.260717679 +0000 UTC m=+0.177586389 container remove 0627da1350495872c6be8a5ec522c6ed2ca096dbef098652832b879e5808fc85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:35:56 compute-0 systemd[1]: libpod-conmon-0627da1350495872c6be8a5ec522c6ed2ca096dbef098652832b879e5808fc85.scope: Deactivated successfully.
Nov 24 09:35:56 compute-0 podman[144422]: 2025-11-24 09:35:56.416287198 +0000 UTC m=+0.042533390 container create 27a6103ab2e688560ae33ea11651be217d358a896e81541b99dd27151efd27be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:35:56 compute-0 systemd[1]: Started libpod-conmon-27a6103ab2e688560ae33ea11651be217d358a896e81541b99dd27151efd27be.scope.
Nov 24 09:35:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7630e324fc7d19b78c03b62fbdafe0e1b990a0215c8ac680546cb4e0231384d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7630e324fc7d19b78c03b62fbdafe0e1b990a0215c8ac680546cb4e0231384d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7630e324fc7d19b78c03b62fbdafe0e1b990a0215c8ac680546cb4e0231384d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7630e324fc7d19b78c03b62fbdafe0e1b990a0215c8ac680546cb4e0231384d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:35:56 compute-0 podman[144422]: 2025-11-24 09:35:56.397423753 +0000 UTC m=+0.023669965 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:35:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:56 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:56 compute-0 podman[144422]: 2025-11-24 09:35:56.510711819 +0000 UTC m=+0.136958021 container init 27a6103ab2e688560ae33ea11651be217d358a896e81541b99dd27151efd27be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_albattani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 09:35:56 compute-0 podman[144422]: 2025-11-24 09:35:56.518292022 +0000 UTC m=+0.144538224 container start 27a6103ab2e688560ae33ea11651be217d358a896e81541b99dd27151efd27be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_albattani, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:35:56 compute-0 podman[144422]: 2025-11-24 09:35:56.521593131 +0000 UTC m=+0.147839343 container attach 27a6103ab2e688560ae33ea11651be217d358a896e81541b99dd27151efd27be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:35:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:56 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:56 compute-0 sudo[144595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tytxsfpqnqzetzhpflbbxsvnauclvsbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976956.6126223-161-164596543275330/AnsiballZ_setup.py'
Nov 24 09:35:56 compute-0 sudo[144595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:35:56.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:35:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:35:56.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:35:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:35:56.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:35:57 compute-0 ceph-mon[74331]: pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:57 compute-0 python3.9[144603]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:35:57 compute-0 lvm[144643]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:35:57 compute-0 lvm[144643]: VG ceph_vg0 finished
Nov 24 09:35:57 compute-0 thirsty_albattani[144440]: {}
Nov 24 09:35:57 compute-0 systemd[1]: libpod-27a6103ab2e688560ae33ea11651be217d358a896e81541b99dd27151efd27be.scope: Deactivated successfully.
Nov 24 09:35:57 compute-0 systemd[1]: libpod-27a6103ab2e688560ae33ea11651be217d358a896e81541b99dd27151efd27be.scope: Consumed 1.241s CPU time.
Nov 24 09:35:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:57 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:57 compute-0 podman[144654]: 2025-11-24 09:35:57.337805664 +0000 UTC m=+0.031067744 container died 27a6103ab2e688560ae33ea11651be217d358a896e81541b99dd27151efd27be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:35:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7630e324fc7d19b78c03b62fbdafe0e1b990a0215c8ac680546cb4e0231384d-merged.mount: Deactivated successfully.
Nov 24 09:35:57 compute-0 podman[144654]: 2025-11-24 09:35:57.400402061 +0000 UTC m=+0.093664121 container remove 27a6103ab2e688560ae33ea11651be217d358a896e81541b99dd27151efd27be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_albattani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:35:57 compute-0 systemd[1]: libpod-conmon-27a6103ab2e688560ae33ea11651be217d358a896e81541b99dd27151efd27be.scope: Deactivated successfully.
Nov 24 09:35:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:35:57 compute-0 sudo[144317]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:35:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:35:57 compute-0 sudo[144595]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:57.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:57 compute-0 sudo[144671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:35:57 compute-0 sudo[144671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:35:57 compute-0 sudo[144671]: pam_unix(sudo:session): session closed for user root
Nov 24 09:35:57 compute-0 sudo[144769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whcylfuanswdqujjwomjoxtymstudsoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976956.6126223-161-164596543275330/AnsiballZ_dnf.py'
Nov 24 09:35:57 compute-0 sudo[144769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:35:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:35:58.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:58 compute-0 python3.9[144771]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:35:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:35:58 compute-0 ceph-mon[74331]: pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:35:58 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:58 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:35:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:58 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:58 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:35:59 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:35:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:35:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:35:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:35:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:35:59.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:35:59 compute-0 sudo[144769]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:00.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:00 compute-0 sudo[144925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhtcdwkgkerndqyyrypyazkomgdvlaez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976959.7725782-197-248670481072429/AnsiballZ_systemd.py'
Nov 24 09:36:00 compute-0 sudo[144925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:00 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:00 compute-0 ceph-mon[74331]: pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:36:00 compute-0 sudo[144928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:36:00 compute-0 sudo[144928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:36:00 compute-0 sudo[144928]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:00 compute-0 python3.9[144927]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:36:00 compute-0 sudo[144925]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:00 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:36:00] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:36:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:36:00] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:36:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:01 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:01 compute-0 sudo[145106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvubjxfbsgohqhgfkxvvnialfuhxzwef ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763976961.0269072-221-261839830489269/AnsiballZ_edpm_nftables_snippet.py'
Nov 24 09:36:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:36:01 compute-0 sudo[145106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:01.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:01 compute-0 python3[145108]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
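
The osp.edpm.edpm_nftables_snippet invocation above stores that rule list as YAML under /var/lib/edpm-config/firewall (here ovn.yaml); the edpm_nftables_from_files call at 09:36:08 below then gathers every snippet in that directory. A rough sketch, not the module's actual implementation, of how such rule dicts could be rendered into nft(8) statements; the default table and chain names are assumptions:

    #!/usr/bin/env python3
    # Rough illustration (NOT the edpm_nftables code itself) of rendering
    # rule dicts like the ones logged above into nft statements.
    import yaml  # PyYAML

    SNIPPET = """\
    - rule_name: 118 neutron vxlan networks
      rule:
        proto: udp
        dport: 4789
    - rule_name: 120 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: OUTPUT
        jump: NOTRACK
        action: append
    """

    for item in yaml.safe_load(SNIPPET):
        r = item["rule"]
        table = r.get("table", "filter")      # assumed default table
        chain = r.get("chain", "EDPM_INPUT")  # hypothetical default chain
        stmt = "notrack" if r.get("jump") == "NOTRACK" else "accept"
        print(f'add rule ip {table} {chain} {r["proto"]} dport {r["dport"]} '
              f'{stmt} comment "{item["rule_name"]}"')

Here EDPM_INPUT is a made-up placeholder, and the state/conntrack fields of rules 119/121 are ignored for brevity; the real chain layout comes from the edpm-nftables-base.yaml file handled a few seconds later in this log.
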
Nov 24 09:36:01 compute-0 sudo[145106]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:02.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:02 compute-0 sshd-session[145133]: Invalid user test from 209.38.206.249 port 44224
Nov 24 09:36:02 compute-0 sshd-session[145133]: Connection closed by invalid user test 209.38.206.249 port 44224 [preauth]
Nov 24 09:36:02 compute-0 sudo[145261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcsvwywqzunbdyvzxpxcmkkgcuytdldt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976962.0549064-248-185392026621386/AnsiballZ_file.py'
Nov 24 09:36:02 compute-0 sudo[145261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:02 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:02 compute-0 python3.9[145263]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:02 compute-0 ceph-mon[74331]: pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:36:02 compute-0 sudo[145261]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:02 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:03 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:03 compute-0 sudo[145414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmixiygiujgaocgtrsqgwtgcittufhwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976963.0105364-272-63433082626965/AnsiballZ_stat.py'
Nov 24 09:36:03 compute-0 sudo[145414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:03.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:03 compute-0 python3.9[145416]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:03 compute-0 sudo[145414]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:03 compute-0 sudo[145492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpqnsphdazboofrhsvhtfehzpbgmdxex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976963.0105364-272-63433082626965/AnsiballZ_file.py'
Nov 24 09:36:03 compute-0 sudo[145492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:36:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:04.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:36:04 compute-0 python3.9[145494]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:04 compute-0 sudo[145492]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:04 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:04 compute-0 ceph-mon[74331]: pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:04 compute-0 sudo[145645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdurqdrowdzhykhotybzgangtypidqds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976964.503926-308-113560464053474/AnsiballZ_stat.py'
Nov 24 09:36:04 compute-0 sudo[145645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:04 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:04 compute-0 python3.9[145647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:05 compute-0 sudo[145645]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:05 compute-0 sudo[145725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfpnqqjcgtoflgjfnaukvvzavsbakpel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976964.503926-308-113560464053474/AnsiballZ_file.py'
Nov 24 09:36:05 compute-0 sudo[145725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:05 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:05 compute-0 python3.9[145727]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.wr5stctv recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:05 compute-0 sudo[145725]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:05.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:36:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:06.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:36:06 compute-0 sudo[145877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqevzhgnnswvdqwiawhfhbwrmjlhhamx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976965.816647-344-96770123433695/AnsiballZ_stat.py'
Nov 24 09:36:06 compute-0 sudo[145877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:06 compute-0 python3.9[145880]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:06 compute-0 sudo[145877]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:06 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:06 compute-0 ceph-mon[74331]: pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:06 compute-0 sudo[145956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bagyvxahzlpfbiewfjcvlkalzlqikazz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976965.816647-344-96770123433695/AnsiballZ_file.py'
Nov 24 09:36:06 compute-0 sudo[145956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:06 compute-0 python3.9[145958]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:06 compute-0 sudo[145956]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:06 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:36:06.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:36:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:07 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:36:07 compute-0 sudo[146109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipesfozsqelrtlfhiqpsjushwikhyypm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976967.1195996-383-186472822504123/AnsiballZ_command.py'
Nov 24 09:36:07 compute-0 sudo[146109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:07.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:07 compute-0 python3.9[146111]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
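
The task above captures `nft -j list ruleset`, which emits the entire live ruleset as JSON. A small standard-library sketch that runs the same command and tallies rules per table/chain:

    #!/usr/bin/env python3
    # Sketch: inspect the live ruleset the same way the task above does,
    # via `nft -j list ruleset`, and count rules per (table, chain).
    import json
    import subprocess
    from collections import Counter

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout
    counts = Counter()
    for obj in json.loads(out).get("nftables", []):
        rule = obj.get("rule")
        if rule:
            counts[(rule["table"], rule["chain"])] += 1

    for (table, chain), n in sorted(counts.items()):
        print(f"{table}/{chain}: {n} rules")

Each element of the top-level "nftables" array is a one-key object (metainfo, table, chain, or rule); rule objects carry their table and chain names, which is what the tally keys on.
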
Nov 24 09:36:07 compute-0 sudo[146109]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:36:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:08.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:36:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:08 compute-0 sudo[146263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icbzjmpuawlunpvupyrbmjisgcsxmhbt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763976968.0183299-407-166165027323925/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 09:36:08 compute-0 sudo[146263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:08 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:08 compute-0 ceph-mon[74331]: pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:36:08 compute-0 python3[146265]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 09:36:08 compute-0 sudo[146263]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:08 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:09 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093609 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:36:09 compute-0 sudo[146416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efnnkethhvgaypfopoeimhjmomjamtrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976969.004952-431-146429348775812/AnsiballZ_stat.py'
Nov 24 09:36:09 compute-0 sudo[146416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:09 compute-0 python3.9[146418]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:09 compute-0 sudo[146416]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000027s ======
Nov 24 09:36:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:09.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 24 09:36:09 compute-0 sshd-session[146419]: Invalid user devuser from 209.38.206.249 port 37626
Nov 24 09:36:09 compute-0 sshd-session[146419]: Connection closed by invalid user devuser 209.38.206.249 port 37626 [preauth]
Nov 24 09:36:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:36:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:10.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:36:10 compute-0 sudo[146543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgimxexzbtinewnngwcteanutpyrhdac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976969.004952-431-146429348775812/AnsiballZ_copy.py'
Nov 24 09:36:10 compute-0 sudo[146543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:10 compute-0 python3.9[146545]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976969.004952-431-146429348775812/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:10 compute-0 sudo[146543]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:10 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:10 compute-0 ceph-mon[74331]: pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:10 compute-0 sudo[146696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raaimuuyurjdfqajlhsuxxqsbjjspiqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976970.5749052-476-232185048198988/AnsiballZ_stat.py'
Nov 24 09:36:10 compute-0 sudo[146696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:10 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:36:10] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:36:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:36:10] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:36:11 compute-0 python3.9[146698]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:11 compute-0 sudo[146696]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:11 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:36:11 compute-0 sudo[146822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qugexruifkqznzmqprwswotkxsumrdgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976970.5749052-476-232185048198988/AnsiballZ_copy.py'
Nov 24 09:36:11 compute-0 sudo[146822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:11.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:11 compute-0 python3.9[146824]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976970.5749052-476-232185048198988/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:11 compute-0 sudo[146822]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:12.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:12 compute-0 sudo[146975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsgbakobkddvasqvfadwtutwbpihzwxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976972.1485019-521-50371374668460/AnsiballZ_stat.py'
Nov 24 09:36:12 compute-0 sudo[146975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:12 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:12 compute-0 python3.9[146977]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:12 compute-0 ceph-mon[74331]: pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:36:12 compute-0 sudo[146975]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:12 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:13 compute-0 sudo[147100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuecqooxhghkwrwsiqeeevosvlzyadah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976972.1485019-521-50371374668460/AnsiballZ_copy.py'
Nov 24 09:36:13 compute-0 sudo[147100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:13 compute-0 python3.9[147102]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976972.1485019-521-50371374668460/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:13 compute-0 sudo[147100]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:13 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:13.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:13 compute-0 sudo[147253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpvixixvqkagxxceagsxgvpsrexvfcic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976973.4866755-566-167961987070587/AnsiballZ_stat.py'
Nov 24 09:36:13 compute-0 sudo[147253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:14 compute-0 python3.9[147255]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:14 compute-0 sudo[147253]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:14.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:14 compute-0 sudo[147381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qumceppnbpjtjiuzmbmrtxxmqtsvmnmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976973.4866755-566-167961987070587/AnsiballZ_copy.py'
Nov 24 09:36:14 compute-0 sudo[147381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:14 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:14 compute-0 sshd-session[147305]: Invalid user ansible from 209.38.206.249 port 46258
Nov 24 09:36:14 compute-0 python3.9[147383]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976973.4866755-566-167961987070587/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:14 compute-0 sudo[147381]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:14 compute-0 sshd-session[147305]: Connection closed by invalid user ansible 209.38.206.249 port 46258 [preauth]
Nov 24 09:36:14 compute-0 ceph-mon[74331]: pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:14 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:15 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:15 compute-0 sudo[147534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaquovihyzsfqbxmlfwrqexwbdpaqeuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976975.0141966-611-15325857108811/AnsiballZ_stat.py'
Nov 24 09:36:15 compute-0 sudo[147534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:36:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:36:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:36:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:36:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:36:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:36:15 compute-0 python3.9[147536]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:15 compute-0 sudo[147534]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:15.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:36:15 compute-0 sudo[147659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svzsgxbulkwyyszjruspqokdodxzdpsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976975.0141966-611-15325857108811/AnsiballZ_copy.py'
Nov 24 09:36:15 compute-0 sudo[147659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:16.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:16 compute-0 python3.9[147661]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763976975.0141966-611-15325857108811/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:16 compute-0 sudo[147659]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:16 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003d10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:16 compute-0 ceph-mon[74331]: pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:16 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:16 compute-0 sudo[147812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwaqmssholxsrrkhgcufyxnpttkajpkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976976.6314518-656-130779941381439/AnsiballZ_file.py'
Nov 24 09:36:16 compute-0 sudo[147812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:36:16.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:36:17 compute-0 python3.9[147814]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:17 compute-0 sudo[147812]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:17 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:17.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:17 compute-0 sudo[147965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phntypnomwzmpnqmalshentxkixcjfui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976977.4993618-680-154115420110405/AnsiballZ_command.py'
Nov 24 09:36:17 compute-0 sudo[147965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:17 compute-0 python3.9[147967]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:17 compute-0 sudo[147965]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:18.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:18 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:18 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:36:18 compute-0 ceph-mon[74331]: pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:18 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:18 compute-0 sudo[148123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhxroenvkktkniuxkhhccjlmrjffuxpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976978.4101062-704-214002324556033/AnsiballZ_blockinfile.py'
Nov 24 09:36:18 compute-0 sudo[148123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:19 compute-0 python3.9[148125]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:19 compute-0 sudo[148123]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:19 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:19.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:19 compute-0 sudo[148276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjzqmnhxrfajpgovsmaumzrtvotorxwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976979.4924536-731-46806401249386/AnsiballZ_command.py'
Nov 24 09:36:19 compute-0 sudo[148276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:20 compute-0 python3.9[148278]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:20 compute-0 sudo[148276]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:20.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:20 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:20 compute-0 sudo[148408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:36:20 compute-0 sudo[148453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkqxlwlbeqeqefuychjmooqythocgsft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976980.377839-755-245620885935152/AnsiballZ_stat.py'
Nov 24 09:36:20 compute-0 sudo[148408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:36:20 compute-0 sudo[148453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:20 compute-0 sudo[148408]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:20 compute-0 ceph-mon[74331]: pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:20 compute-0 python3.9[148457]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:36:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:20 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:20 compute-0 sudo[148453]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:36:20] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Nov 24 09:36:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:36:20] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Nov 24 09:36:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:21 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:36:21 compute-0 sudo[148610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbqqczopmgdjlffdcdpzqylzhnqqxgwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976981.2549546-779-85006004743855/AnsiballZ_command.py'
Nov 24 09:36:21 compute-0 sudo[148610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:21.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:21 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:36:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:21 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:36:21 compute-0 python3.9[148612]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:21 compute-0 sudo[148610]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:22.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:22 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:22 compute-0 sudo[148766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzxseojvijwxcnijpnglryahfqrpktmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976982.1266012-803-58509625537086/AnsiballZ_file.py'
Nov 24 09:36:22 compute-0 sudo[148766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:22 compute-0 ceph-mon[74331]: pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:36:22 compute-0 python3.9[148768]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:22 compute-0 sudo[148766]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:22 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:23 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:36:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:23.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:24.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:24 compute-0 python3.9[148919]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:36:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:24 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:24 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:36:24 compute-0 ceph-mon[74331]: pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:36:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:24 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:25 compute-0 sudo[149072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epcifyvieeacoytfkqmzimuueicgaymm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976984.9517717-923-56436051542489/AnsiballZ_command.py'
Nov 24 09:36:25 compute-0 sudo[149072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:25 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:25 compute-0 python3.9[149074]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:36:25 compute-0 ovs-vsctl[149075]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 24 09:36:25 compute-0 sudo[149072]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:25.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:26.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:26 compute-0 sudo[149226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csiknvgwixaeicosuhtgqkjawmvnnvrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976985.922408-950-187492740830736/AnsiballZ_command.py'
Nov 24 09:36:26 compute-0 sudo[149226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:26 compute-0 python3.9[149228]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:26 compute-0 sudo[149226]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:26 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:26 compute-0 ceph-mon[74331]: pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:36:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:26 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:36:26.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:36:27 compute-0 sudo[149383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaekjzecvlcnfjezcreybxyxpytbcvzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976986.741354-974-29030554190857/AnsiballZ_command.py'
Nov 24 09:36:27 compute-0 sudo[149383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:27 compute-0 python3.9[149385]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:36:27 compute-0 ovs-vsctl[149387]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 24 09:36:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:27 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee880010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:27 compute-0 sudo[149383]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:36:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:27.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:28 compute-0 python3.9[149537]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:36:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:28.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:28 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:28 compute-0 sudo[149690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntygywvxecvjivfxnhdkyztwsneosvvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976988.3894033-1025-162364252043346/AnsiballZ_file.py'
Nov 24 09:36:28 compute-0 sudo[149690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:28 compute-0 ceph-mon[74331]: pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:36:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:28 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:28 compute-0 python3.9[149692]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:36:28 compute-0 sudo[149690]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:29 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:36:29 compute-0 sudo[149843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blrgmgnabdgtvsiukvzxmyqloqfmlfna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976989.2212465-1049-233800522208885/AnsiballZ_stat.py'
Nov 24 09:36:29 compute-0 sudo[149843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:29.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:29 compute-0 python3.9[149845]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:29 compute-0 sudo[149843]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:29 compute-0 sudo[149921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynarmlyrjuguredcgrqubtkeqlbodcib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976989.2212465-1049-233800522208885/AnsiballZ_file.py'
Nov 24 09:36:29 compute-0 sudo[149921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:30.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:30 compute-0 python3.9[149923]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:36:30 compute-0 sudo[149921]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:30 compute-0 sudo[150074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgentiefiwyecpqapnaasgkncalvxkby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976990.3334355-1049-254111720545999/AnsiballZ_stat.py'
Nov 24 09:36:30 compute-0 sudo[150074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:30 compute-0 python3.9[150076]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:30 compute-0 ceph-mon[74331]: pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:36:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:36:30 compute-0 sudo[150074]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:36:30] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:36:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:36:30] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:36:31 compute-0 sudo[150152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnbcvuuzeaywakiyrjajmuunnixzztfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976990.3334355-1049-254111720545999/AnsiballZ_file.py'
Nov 24 09:36:31 compute-0 sudo[150152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:31 compute-0 python3.9[150154]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:36:31 compute-0 sudo[150152]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093631 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:36:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:31 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:36:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:31.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:32.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:32 compute-0 sudo[150306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buonxsuuaglljsrdbcfskzsdoaebiern ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976991.9556527-1118-84128198186952/AnsiballZ_file.py'
Nov 24 09:36:32 compute-0 sudo[150306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:32 compute-0 python3.9[150308]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:32 compute-0 sudo[150306]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:32 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:32 compute-0 ceph-mon[74331]: pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:36:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:32 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:33 compute-0 sudo[150458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sffisdnaikdvfmvgqynywqcovigghjjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976992.7943087-1142-254100358393513/AnsiballZ_stat.py'
Nov 24 09:36:33 compute-0 sudo[150458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:33 compute-0 python3.9[150460]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:33 compute-0 sudo[150458]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:33 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:36:33 compute-0 sudo[150537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjzzbssilyltdeskhtpdqiwzylaxvryi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976992.7943087-1142-254100358393513/AnsiballZ_file.py'
Nov 24 09:36:33 compute-0 sudo[150537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:33.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:33 compute-0 python3.9[150539]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:33 compute-0 sudo[150537]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:36:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8461 writes, 35K keys, 8461 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8461 writes, 1673 syncs, 5.06 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8461 writes, 35K keys, 8461 commit groups, 1.0 writes per commit group, ingest: 21.65 MB, 0.04 MB/s
                                           Interval WAL: 8461 writes, 1673 syncs, 5.06 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 09:36:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:34.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:34 compute-0 sshd-session[150564]: Invalid user user1 from 209.38.206.249 port 46266
Nov 24 09:36:34 compute-0 sshd-session[150564]: Connection closed by invalid user user1 209.38.206.249 port 46266 [preauth]
Nov 24 09:36:34 compute-0 sudo[150692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqqqeoczbjxlgkkseftmqbmlxvngqpee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976994.1489196-1178-191628257193621/AnsiballZ_stat.py'
Nov 24 09:36:34 compute-0 sudo[150692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:34 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:34 compute-0 python3.9[150694]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:34 compute-0 sudo[150692]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:34 compute-0 ceph-mon[74331]: pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:36:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:34 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:34 compute-0 sudo[150770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoujluivppxuqtrdwropaysvxxfstbnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976994.1489196-1178-191628257193621/AnsiballZ_file.py'
Nov 24 09:36:34 compute-0 sudo[150770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:35 compute-0 python3.9[150772]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:35 compute-0 sudo[150770]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:35 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:36:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:35.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:35 compute-0 sudo[150923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyhwehgstuutyzkwqyngzhnvyarhyhdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976995.5281005-1214-78970675185641/AnsiballZ_systemd.py'
Nov 24 09:36:35 compute-0 sudo[150923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:36.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:36 compute-0 python3.9[150925]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:36:36 compute-0 systemd[1]: Reloading.
Nov 24 09:36:36 compute-0 systemd-rc-local-generator[150951]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:36:36 compute-0 systemd-sysv-generator[150957]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:36:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:36 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:36 compute-0 sudo[150923]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:36 compute-0 ceph-mon[74331]: pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:36:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:36 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:36:36.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:36:37 compute-0 sudo[151114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrzgwwibfbdkrkcqboxywdrhcdcuxgne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976996.889404-1238-75577408763012/AnsiballZ_stat.py'
Nov 24 09:36:37 compute-0 sudo[151114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:37 compute-0 python3.9[151117]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:37 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:37 compute-0 sudo[151114]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:36:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:37.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:37 compute-0 sudo[151193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcfkklmledqaaeuejbcznnuthtxbcnzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976996.889404-1238-75577408763012/AnsiballZ_file.py'
Nov 24 09:36:37 compute-0 sudo[151193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:37 compute-0 python3.9[151195]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:37 compute-0 sudo[151193]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:38.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:38 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:38 compute-0 sudo[151346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olcgiqtipjkuyubxqnpwbkearpixhbld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976998.3069296-1274-220602910292067/AnsiballZ_stat.py'
Nov 24 09:36:38 compute-0 sudo[151346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:38 compute-0 python3.9[151348]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:38 compute-0 sudo[151346]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:38 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:38 compute-0 ceph-mon[74331]: pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:36:39 compute-0 sudo[151424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlxdlqbvkuxobkmaooncuxcyevrizcfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976998.3069296-1274-220602910292067/AnsiballZ_file.py'
Nov 24 09:36:39 compute-0 sudo[151424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:39 compute-0 python3.9[151426]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:39 compute-0 sudo[151424]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:39 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:36:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:39.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:39 compute-0 sudo[151579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxtigijvtwjctblqrnfdghfcwsubogmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763976999.557287-1310-215328156148756/AnsiballZ_systemd.py'
Nov 24 09:36:39 compute-0 sudo[151579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:40 compute-0 sshd-session[151527]: Invalid user orangepi from 209.38.206.249 port 47372
Nov 24 09:36:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:40.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:40 compute-0 python3.9[151581]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:36:40 compute-0 sshd-session[151527]: Connection closed by invalid user orangepi 209.38.206.249 port 47372 [preauth]
Nov 24 09:36:40 compute-0 systemd[1]: Reloading.
Nov 24 09:36:40 compute-0 systemd-rc-local-generator[151607]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:36:40 compute-0 systemd-sysv-generator[151611]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:36:40 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 09:36:40 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 09:36:40 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 09:36:40 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 09:36:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:40 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:40 compute-0 sudo[151579]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:40 compute-0 sudo[151648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:36:40 compute-0 sudo[151648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:36:40 compute-0 sudo[151648]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:40 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:40 compute-0 ceph-mon[74331]: pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:36:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:36:40] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:36:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:36:40] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Nov 24 09:36:41 compute-0 sudo[151799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pphfwzxpyyhmjicldqycxitnmbsrbamx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977000.997635-1340-34115210051667/AnsiballZ_file.py'
Nov 24 09:36:41 compute-0 sudo[151799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:41 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:36:41 compute-0 python3.9[151801]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:36:41 compute-0 sudo[151799]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:41.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:42.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:42 compute-0 sudo[151951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oddvzlydbdoxqfpndhtzbliirorqwqua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977001.8323894-1364-133197122861924/AnsiballZ_stat.py'
Nov 24 09:36:42 compute-0 sudo[151951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:42 compute-0 python3.9[151954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:42 compute-0 sudo[151951]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88002560 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:42 compute-0 sshd-session[151955]: Invalid user linuxadmin from 209.38.206.249 port 42178
Nov 24 09:36:42 compute-0 sudo[152077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfrtjnctzmrvywrmxinshmhhpefhqmxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977001.8323894-1364-133197122861924/AnsiballZ_copy.py'
Nov 24 09:36:42 compute-0 sudo[152077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:42 compute-0 sshd-session[151955]: Connection closed by invalid user linuxadmin 209.38.206.249 port 42178 [preauth]
Nov 24 09:36:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee740040a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:42 compute-0 python3.9[152079]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977001.8323894-1364-133197122861924/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:36:42 compute-0 sudo[152077]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:42 compute-0 ceph-mon[74331]: pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:36:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:43 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:43.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:43 compute-0 sudo[152230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxjwonpnpowsaguoeusxxecfzxxpghhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977003.6288996-1415-5734628556852/AnsiballZ_file.py'
Nov 24 09:36:43 compute-0 sudo[152230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093644 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:36:44 compute-0 python3.9[152232]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:36:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:44.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:44 compute-0 sudo[152230]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:44 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:44 compute-0 sudo[152383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwxvkqwrkmwqisqrorfsadkisdhyaozn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977004.3671322-1439-142252428219430/AnsiballZ_stat.py'
Nov 24 09:36:44 compute-0 sudo[152383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:44 compute-0 python3.9[152385]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:36:44 compute-0 sudo[152383]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:44 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88002560 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:44 compute-0 ceph-mon[74331]: pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:45 compute-0 sudo[152507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwykqsvdfvfktqskozxybpturmrqymda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977004.3671322-1439-142252428219430/AnsiballZ_copy.py'
Nov 24 09:36:45 compute-0 sudo[152507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:36:45
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'vms', 'default.rgw.meta', 'volumes', '.nfs', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.mgr', 'backups', 'cephfs.cephfs.data']
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:36:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:45 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee740040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:36:45 compute-0 python3.9[152509]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977004.3671322-1439-142252428219430/.source.json _original_basename=.bzcc5uh8 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:36:45 compute-0 sudo[152507]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:36:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:36:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:45.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:36:46 compute-0 sudo[152661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzvxatchxuvqlcgievtkrivzazapsqbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977005.7718906-1484-211351016610753/AnsiballZ_file.py'
Nov 24 09:36:46 compute-0 sudo[152661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:46.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:46 compute-0 python3.9[152663]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:36:46 compute-0 sudo[152661]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:46 compute-0 sshd-session[152630]: Invalid user dev from 209.38.206.249 port 42194
Nov 24 09:36:46 compute-0 sshd-session[152630]: Connection closed by invalid user dev 209.38.206.249 port 42194 [preauth]
Nov 24 09:36:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:46 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee740040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:46 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:36:46.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:36:47 compute-0 sudo[152814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dagqtrbiujpjkzvdyyxolbfbkgsvkewc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977006.7620718-1508-39435220778967/AnsiballZ_stat.py'
Nov 24 09:36:47 compute-0 sudo[152814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:47 compute-0 ceph-mon[74331]: pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:36:47 compute-0 sudo[152814]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:47 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:36:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:47.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:47 compute-0 sudo[152938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imohchkvhjmyfispprhptardbactphkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977006.7620718-1508-39435220778967/AnsiballZ_copy.py'
Nov 24 09:36:47 compute-0 sudo[152938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:47 compute-0 sudo[152938]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:48.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:48 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:48 compute-0 sudo[153091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yagzfpuvpttkyvoeyqiubbcwyfgmkibx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977008.4108217-1559-110093846097154/AnsiballZ_container_config_data.py'
Nov 24 09:36:48 compute-0 sudo[153091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:48 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee740040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:49 compute-0 python3.9[153093]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 24 09:36:49 compute-0 ceph-mon[74331]: pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:36:49 compute-0 sudo[153091]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:49 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:36:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:49.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:49 compute-0 sudo[153244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kapnwrlnbcmrblwvytshsprqlonqqmxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977009.438376-1586-235670572312943/AnsiballZ_container_config_hash.py'
Nov 24 09:36:49 compute-0 sudo[153244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:50 compute-0 python3.9[153246]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 09:36:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:36:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:50.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:36:50 compute-0 sudo[153244]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:50 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:50 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:36:50] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Nov 24 09:36:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:36:50] "GET /metrics HTTP/1.1" 200 48256 "" "Prometheus/2.51.0"
Nov 24 09:36:51 compute-0 sudo[153397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fycrhdhptkswceipiqnrubdmjmqrhkoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977010.5102768-1613-181270110712675/AnsiballZ_podman_container_info.py'
Nov 24 09:36:51 compute-0 sudo[153397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:51 compute-0 ceph-mon[74331]: pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:36:51 compute-0 python3.9[153399]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 09:36:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:51 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee740040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:51 compute-0 sudo[153397]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:51.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:52.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:52 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:52 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88002560 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:52 compute-0 sudo[153578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtkblskloeckritdtvvadiworqakorza ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763977012.4564614-1652-75445420983257/AnsiballZ_edpm_container_manage.py'
Nov 24 09:36:52 compute-0 sudo[153578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:36:53 compute-0 ceph-mon[74331]: pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:53 compute-0 python3[153580]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 09:36:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:53 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:53 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:36:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:53.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:54.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:54 compute-0 ceph-mon[74331]: pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:54 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74004100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:54 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:55 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88002560 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:55.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:56.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:56 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:36:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:56 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:36:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:56 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:36:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:56 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:56 compute-0 ceph-mon[74331]: pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:36:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:56 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74004120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:36:56.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:36:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:57 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:36:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:57.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:57 compute-0 sudo[153679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:36:57 compute-0 sudo[153679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:36:57 compute-0 sudo[153679]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:57 compute-0 sudo[153704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:36:57 compute-0 sudo[153704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:36:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:36:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:36:58.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:36:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:36:58 compute-0 podman[153594]: 2025-11-24 09:36:58.505695606 +0000 UTC m=+5.154948176 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 09:36:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:58 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88002560 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:58 compute-0 sudo[153704]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:36:58 compute-0 podman[153799]: 2025-11-24 09:36:58.721372164 +0000 UTC m=+0.075985126 container create c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:36:58 compute-0 podman[153799]: 2025-11-24 09:36:58.670249363 +0000 UTC m=+0.024862345 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 09:36:58 compute-0 python3[153580]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 09:36:58 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:36:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:36:58 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:36:58 compute-0 ceph-mon[74331]: pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:36:58 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:36:58 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:36:58 compute-0 sudo[153578]: pam_unix(sudo:session): session closed for user root
Nov 24 09:36:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:58 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:36:59 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:36:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:36:59 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:36:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:59 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea00013c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:36:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:36:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:36:59 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:36:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:36:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:36:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:36:59.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:36:59 compute-0 sudo[153989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrhasfkvmtngpywfujhmjiziwttlbqdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977019.6624653-1676-195007013013567/AnsiballZ_stat.py'
Nov 24 09:36:59 compute-0 sudo[153989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:37:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:37:00 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:00 compute-0 sudo[153992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:37:00 compute-0 sudo[153992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:00 compute-0 sudo[153992]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:00.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:00 compute-0 python3.9[153991]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:37:00 compute-0 sudo[154018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:37:00 compute-0 sudo[154018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:00 compute-0 sudo[153989]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:00 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:00 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:00 compute-0 ceph-mon[74331]: pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:37:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:37:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:37:00 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:00 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:37:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:37:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:37:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:00 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:00 compute-0 podman[154109]: 2025-11-24 09:37:00.609253309 +0000 UTC m=+0.055259211 container create 362889e830aa1af8fbe419bbc81a5d26e6a2081e4c1dbecad3a4c7a3ee4bf41c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_black, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:37:00 compute-0 podman[154109]: 2025-11-24 09:37:00.579772305 +0000 UTC m=+0.025778227 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:37:00 compute-0 systemd[1]: Started libpod-conmon-362889e830aa1af8fbe419bbc81a5d26e6a2081e4c1dbecad3a4c7a3ee4bf41c.scope.
Nov 24 09:37:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:37:00 compute-0 podman[154109]: 2025-11-24 09:37:00.730199105 +0000 UTC m=+0.176205037 container init 362889e830aa1af8fbe419bbc81a5d26e6a2081e4c1dbecad3a4c7a3ee4bf41c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_black, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 24 09:37:00 compute-0 podman[154109]: 2025-11-24 09:37:00.739913036 +0000 UTC m=+0.185918938 container start 362889e830aa1af8fbe419bbc81a5d26e6a2081e4c1dbecad3a4c7a3ee4bf41c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_black, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 24 09:37:00 compute-0 podman[154109]: 2025-11-24 09:37:00.743634296 +0000 UTC m=+0.189640298 container attach 362889e830aa1af8fbe419bbc81a5d26e6a2081e4c1dbecad3a4c7a3ee4bf41c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_black, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:37:00 compute-0 gracious_black[154177]: 167 167
Nov 24 09:37:00 compute-0 systemd[1]: libpod-362889e830aa1af8fbe419bbc81a5d26e6a2081e4c1dbecad3a4c7a3ee4bf41c.scope: Deactivated successfully.
Nov 24 09:37:00 compute-0 podman[154109]: 2025-11-24 09:37:00.74756609 +0000 UTC m=+0.193572002 container died 362889e830aa1af8fbe419bbc81a5d26e6a2081e4c1dbecad3a4c7a3ee4bf41c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d50beaffa0bbc2bdc226f9d8c82254f3772790d973ecf8d5df81cbaa38cb060-merged.mount: Deactivated successfully.
Nov 24 09:37:00 compute-0 podman[154109]: 2025-11-24 09:37:00.828294046 +0000 UTC m=+0.274299948 container remove 362889e830aa1af8fbe419bbc81a5d26e6a2081e4c1dbecad3a4c7a3ee4bf41c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_black, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:37:00 compute-0 systemd[1]: libpod-conmon-362889e830aa1af8fbe419bbc81a5d26e6a2081e4c1dbecad3a4c7a3ee4bf41c.scope: Deactivated successfully.
Nov 24 09:37:00 compute-0 sudo[154241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:37:00 compute-0 sudo[154241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:00 compute-0 sudo[154241]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:00 compute-0 sudo[154293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqbcnmfsejihciaxoapiwbwkcskwismt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977020.6174366-1703-42482914992891/AnsiballZ_file.py'
Nov 24 09:37:00 compute-0 sudo[154293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:00 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88002580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:37:00] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Nov 24 09:37:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:37:00] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Nov 24 09:37:01 compute-0 podman[154302]: 2025-11-24 09:37:01.021247441 +0000 UTC m=+0.060323861 container create b083c2a38eaa2828210630a9323f020d0c0fc7799a55c359832f33a3933f8bbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bohr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:37:01 compute-0 systemd[1]: Started libpod-conmon-b083c2a38eaa2828210630a9323f020d0c0fc7799a55c359832f33a3933f8bbc.scope.
Nov 24 09:37:01 compute-0 podman[154302]: 2025-11-24 09:37:00.992453904 +0000 UTC m=+0.031530344 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:37:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89043b62b402cacf3873532e9b1e38ec03c291c0d2211acfe2dc57eda82d6595/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89043b62b402cacf3873532e9b1e38ec03c291c0d2211acfe2dc57eda82d6595/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89043b62b402cacf3873532e9b1e38ec03c291c0d2211acfe2dc57eda82d6595/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89043b62b402cacf3873532e9b1e38ec03c291c0d2211acfe2dc57eda82d6595/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89043b62b402cacf3873532e9b1e38ec03c291c0d2211acfe2dc57eda82d6595/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:01 compute-0 python3.9[154296]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:01 compute-0 podman[154302]: 2025-11-24 09:37:01.131739628 +0000 UTC m=+0.170816068 container init b083c2a38eaa2828210630a9323f020d0c0fc7799a55c359832f33a3933f8bbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:37:01 compute-0 sudo[154293]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:01 compute-0 podman[154302]: 2025-11-24 09:37:01.141652154 +0000 UTC m=+0.180728574 container start b083c2a38eaa2828210630a9323f020d0c0fc7799a55c359832f33a3933f8bbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:37:01 compute-0 podman[154302]: 2025-11-24 09:37:01.155329151 +0000 UTC m=+0.194405581 container attach b083c2a38eaa2828210630a9323f020d0c0fc7799a55c359832f33a3933f8bbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bohr, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:37:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:37:01 compute-0 sudo[154399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtpcfyruamzgibntamraybronrmesbwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977020.6174366-1703-42482914992891/AnsiballZ_stat.py'
Nov 24 09:37:01 compute-0 sudo[154399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:01 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 24 09:37:01 compute-0 hardcore_bohr[154318]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:37:01 compute-0 hardcore_bohr[154318]: --> All data devices are unavailable
Nov 24 09:37:01 compute-0 systemd[1]: libpod-b083c2a38eaa2828210630a9323f020d0c0fc7799a55c359832f33a3933f8bbc.scope: Deactivated successfully.
Nov 24 09:37:01 compute-0 podman[154302]: 2025-11-24 09:37:01.520056135 +0000 UTC m=+0.559132555 container died b083c2a38eaa2828210630a9323f020d0c0fc7799a55c359832f33a3933f8bbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:37:01 compute-0 python3.9[154403]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:37:01 compute-0 sudo[154399]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:01.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-89043b62b402cacf3873532e9b1e38ec03c291c0d2211acfe2dc57eda82d6595-merged.mount: Deactivated successfully.
Nov 24 09:37:01 compute-0 podman[154302]: 2025-11-24 09:37:01.698542605 +0000 UTC m=+0.737619035 container remove b083c2a38eaa2828210630a9323f020d0c0fc7799a55c359832f33a3933f8bbc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bohr, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:37:01 compute-0 systemd[1]: libpod-conmon-b083c2a38eaa2828210630a9323f020d0c0fc7799a55c359832f33a3933f8bbc.scope: Deactivated successfully.
Nov 24 09:37:01 compute-0 sudo[154018]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:01 compute-0 sudo[154478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:37:01 compute-0 sudo[154478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:01 compute-0 sudo[154478]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:01 compute-0 sudo[154503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:37:01 compute-0 sudo[154503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:02 compute-0 sudo[154629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epmktvooqmgucpumtxekdckbrhhwqgju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977021.6678886-1703-22846111100612/AnsiballZ_copy.py'
Nov 24 09:37:02 compute-0 sudo[154629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:02.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:02 compute-0 python3.9[154638]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763977021.6678886-1703-22846111100612/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:02 compute-0 podman[154668]: 2025-11-24 09:37:02.331758606 +0000 UTC m=+0.077722625 container create 5e3de4428e225df0a7a76d6bf7f921ed752e14c0ddf6091b9a8b60d9e21bfd90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dirac, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:37:02 compute-0 sudo[154629]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:02 compute-0 podman[154668]: 2025-11-24 09:37:02.280375751 +0000 UTC m=+0.026339850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:37:02 compute-0 systemd[1]: Started libpod-conmon-5e3de4428e225df0a7a76d6bf7f921ed752e14c0ddf6091b9a8b60d9e21bfd90.scope.
Nov 24 09:37:02 compute-0 ceph-mon[74331]: pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 24 09:37:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:37:02 compute-0 podman[154668]: 2025-11-24 09:37:02.450412068 +0000 UTC m=+0.196376117 container init 5e3de4428e225df0a7a76d6bf7f921ed752e14c0ddf6091b9a8b60d9e21bfd90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dirac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 09:37:02 compute-0 podman[154668]: 2025-11-24 09:37:02.459075335 +0000 UTC m=+0.205039354 container start 5e3de4428e225df0a7a76d6bf7f921ed752e14c0ddf6091b9a8b60d9e21bfd90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dirac, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 09:37:02 compute-0 eloquent_dirac[154685]: 167 167
Nov 24 09:37:02 compute-0 systemd[1]: libpod-5e3de4428e225df0a7a76d6bf7f921ed752e14c0ddf6091b9a8b60d9e21bfd90.scope: Deactivated successfully.
Nov 24 09:37:02 compute-0 podman[154668]: 2025-11-24 09:37:02.478747164 +0000 UTC m=+0.224711183 container attach 5e3de4428e225df0a7a76d6bf7f921ed752e14c0ddf6091b9a8b60d9e21bfd90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dirac, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:37:02 compute-0 podman[154668]: 2025-11-24 09:37:02.479318099 +0000 UTC m=+0.225282118 container died 5e3de4428e225df0a7a76d6bf7f921ed752e14c0ddf6091b9a8b60d9e21bfd90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:37:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-18a9c2b2f4685d69fb846182a0f4f1b76f75f1ec3f0c84caff291fd7c8ce98f5-merged.mount: Deactivated successfully.
Nov 24 09:37:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:02 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:02 compute-0 podman[154668]: 2025-11-24 09:37:02.54936284 +0000 UTC m=+0.295326859 container remove 5e3de4428e225df0a7a76d6bf7f921ed752e14c0ddf6091b9a8b60d9e21bfd90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 24 09:37:02 compute-0 systemd[1]: libpod-conmon-5e3de4428e225df0a7a76d6bf7f921ed752e14c0ddf6091b9a8b60d9e21bfd90.scope: Deactivated successfully.
Nov 24 09:37:02 compute-0 sudo[154777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwijavcsbzkmjebkmerolyxrpxyovnyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977021.6678886-1703-22846111100612/AnsiballZ_systemd.py'
Nov 24 09:37:02 compute-0 sudo[154777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:02 compute-0 podman[154785]: 2025-11-24 09:37:02.720830902 +0000 UTC m=+0.051869049 container create a7c83dad6db4ae54edba1a2d40588b840769a1e47b7322e6badb2ef53fab3520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 09:37:02 compute-0 systemd[1]: Started libpod-conmon-a7c83dad6db4ae54edba1a2d40588b840769a1e47b7322e6badb2ef53fab3520.scope.
Nov 24 09:37:02 compute-0 podman[154785]: 2025-11-24 09:37:02.698656353 +0000 UTC m=+0.029694530 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:37:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910a88970753ea7d8ddc0f4af774bf159f93e89c7563e2e46e5e8cae10585108/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910a88970753ea7d8ddc0f4af774bf159f93e89c7563e2e46e5e8cae10585108/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910a88970753ea7d8ddc0f4af774bf159f93e89c7563e2e46e5e8cae10585108/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/910a88970753ea7d8ddc0f4af774bf159f93e89c7563e2e46e5e8cae10585108/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:02 compute-0 podman[154785]: 2025-11-24 09:37:02.815792868 +0000 UTC m=+0.146831045 container init a7c83dad6db4ae54edba1a2d40588b840769a1e47b7322e6badb2ef53fab3520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:37:02 compute-0 podman[154785]: 2025-11-24 09:37:02.82342238 +0000 UTC m=+0.154460527 container start a7c83dad6db4ae54edba1a2d40588b840769a1e47b7322e6badb2ef53fab3520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Nov 24 09:37:02 compute-0 podman[154785]: 2025-11-24 09:37:02.827231791 +0000 UTC m=+0.158269968 container attach a7c83dad6db4ae54edba1a2d40588b840769a1e47b7322e6badb2ef53fab3520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_pare, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:37:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:02 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:02 compute-0 python3.9[154780]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:37:02 compute-0 systemd[1]: Reloading.
Nov 24 09:37:03 compute-0 systemd-rc-local-generator[154838]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:37:03 compute-0 systemd-sysv-generator[154841]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:37:03 compute-0 hopeful_pare[154801]: {
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:     "0": [
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:         {
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             "devices": [
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "/dev/loop3"
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             ],
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             "lv_name": "ceph_lv0",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             "lv_size": "21470642176",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             "name": "ceph_lv0",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             "tags": {
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.cluster_name": "ceph",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.crush_device_class": "",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.encrypted": "0",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.osd_id": "0",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.type": "block",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.vdo": "0",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:                 "ceph.with_tpm": "0"
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             },
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             "type": "block",
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:             "vg_name": "ceph_vg0"
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:         }
Nov 24 09:37:03 compute-0 hopeful_pare[154801]:     ]
Nov 24 09:37:03 compute-0 hopeful_pare[154801]: }
Nov 24 09:37:03 compute-0 podman[154785]: 2025-11-24 09:37:03.133715626 +0000 UTC m=+0.464753793 container died a7c83dad6db4ae54edba1a2d40588b840769a1e47b7322e6badb2ef53fab3520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_pare, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:37:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:03 compute-0 systemd[1]: libpod-a7c83dad6db4ae54edba1a2d40588b840769a1e47b7322e6badb2ef53fab3520.scope: Deactivated successfully.
Nov 24 09:37:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-910a88970753ea7d8ddc0f4af774bf159f93e89c7563e2e46e5e8cae10585108-merged.mount: Deactivated successfully.
Nov 24 09:37:03 compute-0 podman[154785]: 2025-11-24 09:37:03.326821744 +0000 UTC m=+0.657859901 container remove a7c83dad6db4ae54edba1a2d40588b840769a1e47b7322e6badb2ef53fab3520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_pare, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:37:03 compute-0 systemd[1]: libpod-conmon-a7c83dad6db4ae54edba1a2d40588b840769a1e47b7322e6badb2ef53fab3520.scope: Deactivated successfully.
Nov 24 09:37:03 compute-0 sudo[154777]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:03 compute-0 sudo[154503]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:03 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee880025a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:03 compute-0 sudo[154861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:37:03 compute-0 sudo[154861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:03 compute-0 sudo[154861]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:37:03 compute-0 sudo[154909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:37:03 compute-0 sudo[154909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:03 compute-0 sudo[154984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kluzdzgegsrlokytjzksnxtqdryjyrtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977021.6678886-1703-22846111100612/AnsiballZ_systemd.py'
Nov 24 09:37:03 compute-0 sudo[154984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:03.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:03 compute-0 python3.9[154986]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:37:03 compute-0 systemd[1]: Reloading.
Nov 24 09:37:04 compute-0 podman[155028]: 2025-11-24 09:37:03.929835206 +0000 UTC m=+0.035239583 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:37:04 compute-0 systemd-sysv-generator[155072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:37:04 compute-0 systemd-rc-local-generator[155069]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:37:04 compute-0 podman[155028]: 2025-11-24 09:37:04.05025055 +0000 UTC m=+0.155654897 container create 34332280279eba7b1713d0ced010405e06bf77914b85207be9cbac96433c1924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:37:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:04.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:04 compute-0 systemd[1]: Started libpod-conmon-34332280279eba7b1713d0ced010405e06bf77914b85207be9cbac96433c1924.scope.
Nov 24 09:37:04 compute-0 systemd[1]: Starting ovn_controller container...
Nov 24 09:37:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:37:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:04 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0002400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:04 compute-0 podman[155028]: 2025-11-24 09:37:04.639636316 +0000 UTC m=+0.745040663 container init 34332280279eba7b1713d0ced010405e06bf77914b85207be9cbac96433c1924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Nov 24 09:37:04 compute-0 podman[155028]: 2025-11-24 09:37:04.648910016 +0000 UTC m=+0.754314363 container start 34332280279eba7b1713d0ced010405e06bf77914b85207be9cbac96433c1924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:37:04 compute-0 systemd[1]: libpod-34332280279eba7b1713d0ced010405e06bf77914b85207be9cbac96433c1924.scope: Deactivated successfully.
Nov 24 09:37:04 compute-0 focused_joliot[155082]: 167 167
Nov 24 09:37:04 compute-0 conmon[155082]: conmon 34332280279eba7b1713 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34332280279eba7b1713d0ced010405e06bf77914b85207be9cbac96433c1924.scope/container/memory.events
Nov 24 09:37:04 compute-0 podman[155028]: 2025-11-24 09:37:04.90171542 +0000 UTC m=+1.007119767 container attach 34332280279eba7b1713d0ced010405e06bf77914b85207be9cbac96433c1924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:37:04 compute-0 podman[155028]: 2025-11-24 09:37:04.903417861 +0000 UTC m=+1.008822208 container died 34332280279eba7b1713d0ced010405e06bf77914b85207be9cbac96433c1924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_joliot, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 24 09:37:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:04 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:04 compute-0 ceph-mon[74331]: pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-158db0ff2bf2b2a6238a8dce712118024dc809d465c0fe1b270db06c8ae3def2-merged.mount: Deactivated successfully.
Nov 24 09:37:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:05 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:37:05 compute-0 podman[155028]: 2025-11-24 09:37:05.602795311 +0000 UTC m=+1.708199658 container remove 34332280279eba7b1713d0ced010405e06bf77914b85207be9cbac96433c1924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:37:05 compute-0 systemd[1]: libpod-conmon-34332280279eba7b1713d0ced010405e06bf77914b85207be9cbac96433c1924.scope: Deactivated successfully.
Nov 24 09:37:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:05.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007aa62dcdd7abd028a62eda69c4b1ce6f0096006c421defcaa7f9090247e519/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:05 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad.
Nov 24 09:37:05 compute-0 podman[155086]: 2025-11-24 09:37:05.769379487 +0000 UTC m=+1.470624718 container init c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:37:05 compute-0 ovn_controller[155123]: + sudo -E kolla_set_configs
Nov 24 09:37:05 compute-0 podman[155131]: 2025-11-24 09:37:05.784083658 +0000 UTC m=+0.051311586 container create 2963b49657ecdf2da4cdb440a2c29ada1912110139181b60f608f167ac15b909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mirzakhani, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:37:05 compute-0 podman[155086]: 2025-11-24 09:37:05.799713471 +0000 UTC m=+1.500958672 container start c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 09:37:05 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 24 09:37:05 compute-0 edpm-start-podman-container[155086]: ovn_controller
Nov 24 09:37:05 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 24 09:37:05 compute-0 systemd[1]: Started libpod-conmon-2963b49657ecdf2da4cdb440a2c29ada1912110139181b60f608f167ac15b909.scope.
Nov 24 09:37:05 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 24 09:37:05 compute-0 podman[155131]: 2025-11-24 09:37:05.760986416 +0000 UTC m=+0.028214364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:37:05 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 24 09:37:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200ad3d14bf85887b46afadd2966a9a8da7262b3b7b4ebc31e3d0c7c38172e8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200ad3d14bf85887b46afadd2966a9a8da7262b3b7b4ebc31e3d0c7c38172e8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200ad3d14bf85887b46afadd2966a9a8da7262b3b7b4ebc31e3d0c7c38172e8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200ad3d14bf85887b46afadd2966a9a8da7262b3b7b4ebc31e3d0c7c38172e8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:37:05 compute-0 systemd[155181]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 24 09:37:05 compute-0 podman[155131]: 2025-11-24 09:37:05.892023624 +0000 UTC m=+0.159251582 container init 2963b49657ecdf2da4cdb440a2c29ada1912110139181b60f608f167ac15b909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:37:05 compute-0 podman[155131]: 2025-11-24 09:37:05.902646768 +0000 UTC m=+0.169874696 container start 2963b49657ecdf2da4cdb440a2c29ada1912110139181b60f608f167ac15b909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mirzakhani, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:37:05 compute-0 podman[155131]: 2025-11-24 09:37:05.910244219 +0000 UTC m=+0.177472147 container attach 2963b49657ecdf2da4cdb440a2c29ada1912110139181b60f608f167ac15b909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mirzakhani, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:37:05 compute-0 podman[155149]: 2025-11-24 09:37:05.919014908 +0000 UTC m=+0.107760573 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 09:37:05 compute-0 edpm-start-podman-container[155084]: Creating additional drop-in dependency for "ovn_controller" (c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad)
Nov 24 09:37:05 compute-0 systemd[1]: c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad-5cce7f0d9a3a0610.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 09:37:05 compute-0 systemd[1]: c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad-5cce7f0d9a3a0610.service: Failed with result 'exit-code'.
Nov 24 09:37:05 compute-0 systemd[1]: Reloading.
Nov 24 09:37:06 compute-0 systemd[155181]: Queued start job for default target Main User Target.
Nov 24 09:37:06 compute-0 systemd[155181]: Created slice User Application Slice.
Nov 24 09:37:06 compute-0 systemd[155181]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 24 09:37:06 compute-0 systemd[155181]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 09:37:06 compute-0 systemd[155181]: Reached target Paths.
Nov 24 09:37:06 compute-0 systemd[155181]: Reached target Timers.
Nov 24 09:37:06 compute-0 systemd[155181]: Starting D-Bus User Message Bus Socket...
Nov 24 09:37:06 compute-0 systemd[155181]: Starting Create User's Volatile Files and Directories...
Nov 24 09:37:06 compute-0 systemd-rc-local-generator[155234]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:37:06 compute-0 systemd[155181]: Listening on D-Bus User Message Bus Socket.
Nov 24 09:37:06 compute-0 systemd[155181]: Reached target Sockets.
Nov 24 09:37:06 compute-0 systemd-sysv-generator[155239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:37:06 compute-0 systemd[155181]: Finished Create User's Volatile Files and Directories.
Nov 24 09:37:06 compute-0 systemd[155181]: Reached target Basic System.
Nov 24 09:37:06 compute-0 systemd[155181]: Reached target Main User Target.
Nov 24 09:37:06 compute-0 systemd[155181]: Startup finished in 152ms.
Nov 24 09:37:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093706 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:37:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:06.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:06 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 24 09:37:06 compute-0 systemd[1]: Started ovn_controller container.
Nov 24 09:37:06 compute-0 systemd[1]: Started Session c1 of User root.
Nov 24 09:37:06 compute-0 sudo[154984]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:06 compute-0 ovn_controller[155123]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 09:37:06 compute-0 ovn_controller[155123]: INFO:__main__:Validating config file
Nov 24 09:37:06 compute-0 ovn_controller[155123]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 09:37:06 compute-0 ovn_controller[155123]: INFO:__main__:Writing out command to execute
Nov 24 09:37:06 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 24 09:37:06 compute-0 ovn_controller[155123]: ++ cat /run_command
Nov 24 09:37:06 compute-0 ovn_controller[155123]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 09:37:06 compute-0 ovn_controller[155123]: + ARGS=
Nov 24 09:37:06 compute-0 ovn_controller[155123]: + sudo kolla_copy_cacerts
Nov 24 09:37:06 compute-0 systemd[1]: Started Session c2 of User root.
Nov 24 09:37:06 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 24 09:37:06 compute-0 ovn_controller[155123]: + [[ ! -n '' ]]
Nov 24 09:37:06 compute-0 ovn_controller[155123]: + . kolla_extend_start
Nov 24 09:37:06 compute-0 ovn_controller[155123]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 09:37:06 compute-0 ovn_controller[155123]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 24 09:37:06 compute-0 ovn_controller[155123]: + umask 0022
Nov 24 09:37:06 compute-0 ovn_controller[155123]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 24 09:37:06 compute-0 NetworkManager[48883]: <info>  [1763977026.4306] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 24 09:37:06 compute-0 NetworkManager[48883]: <info>  [1763977026.4313] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:37:06 compute-0 NetworkManager[48883]: <info>  [1763977026.4323] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 24 09:37:06 compute-0 NetworkManager[48883]: <info>  [1763977026.4327] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 24 09:37:06 compute-0 NetworkManager[48883]: <info>  [1763977026.4329] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 09:37:06 compute-0 kernel: br-int: entered promiscuous mode
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 09:37:06 compute-0 NetworkManager[48883]: <info>  [1763977026.4547] manager: (ovn-803b13-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 24 09:37:06 compute-0 NetworkManager[48883]: <info>  [1763977026.4552] manager: (ovn-fae732-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Nov 24 09:37:06 compute-0 NetworkManager[48883]: <info>  [1763977026.4557] manager: (ovn-f6640d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 24 09:37:06 compute-0 systemd-udevd[155351]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:37:06 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 24 09:37:06 compute-0 systemd-udevd[155358]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 09:37:06 compute-0 NetworkManager[48883]: <info>  [1763977026.4758] device (genev_sys_6081): carrier: link connected
Nov 24 09:37:06 compute-0 NetworkManager[48883]: <info>  [1763977026.4766] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Nov 24 09:37:06 compute-0 ovn_controller[155123]: 2025-11-24T09:37:06Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 09:37:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:06 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:06 compute-0 lvm[155403]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:37:06 compute-0 lvm[155403]: VG ceph_vg0 finished
Nov 24 09:37:06 compute-0 elated_mirzakhani[155177]: {}
Nov 24 09:37:06 compute-0 systemd[1]: libpod-2963b49657ecdf2da4cdb440a2c29ada1912110139181b60f608f167ac15b909.scope: Deactivated successfully.
Nov 24 09:37:06 compute-0 systemd[1]: libpod-2963b49657ecdf2da4cdb440a2c29ada1912110139181b60f608f167ac15b909.scope: Consumed 1.214s CPU time.
Nov 24 09:37:06 compute-0 podman[155131]: 2025-11-24 09:37:06.713175171 +0000 UTC m=+0.980403119 container died 2963b49657ecdf2da4cdb440a2c29ada1912110139181b60f608f167ac15b909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mirzakhani, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 09:37:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-200ad3d14bf85887b46afadd2966a9a8da7262b3b7b4ebc31e3d0c7c38172e8f-merged.mount: Deactivated successfully.
Nov 24 09:37:06 compute-0 sudo[155495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldxfrsifzzjtfdfvruosuhabwxdwqohz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977026.5017304-1787-133303047222032/AnsiballZ_command.py'
Nov 24 09:37:06 compute-0 sudo[155495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:06 compute-0 podman[155131]: 2025-11-24 09:37:06.804694996 +0000 UTC m=+1.071922934 container remove 2963b49657ecdf2da4cdb440a2c29ada1912110139181b60f608f167ac15b909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mirzakhani, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:37:06 compute-0 systemd[1]: libpod-conmon-2963b49657ecdf2da4cdb440a2c29ada1912110139181b60f608f167ac15b909.scope: Deactivated successfully.
Nov 24 09:37:06 compute-0 sudo[154909]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:37:06 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:37:06 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:06 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0002400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:06 compute-0 sudo[155498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:37:06 compute-0 sudo[155498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:06 compute-0 sudo[155498]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:06 compute-0 ceph-mon[74331]: pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:37:06 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:06 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:37:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:37:06.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:37:07 compute-0 python3.9[155497]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:37:07 compute-0 ovs-vsctl[155523]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 24 09:37:07 compute-0 sudo[155495]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:07 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:37:07 compute-0 sudo[155674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlkzwbssixmuxaotmkagqkkgnypwxzfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977027.2420628-1811-222422494933099/AnsiballZ_command.py'
Nov 24 09:37:07 compute-0 sudo[155674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:07.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:07 compute-0 python3.9[155676]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:37:07 compute-0 ovs-vsctl[155678]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 24 09:37:07 compute-0 sudo[155674]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:08.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:08 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:08 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:09 compute-0 ceph-mon[74331]: pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:37:09 compute-0 sshd-session[155705]: Connection closed by authenticating user root 209.38.206.249 port 45090 [preauth]
Nov 24 09:37:09 compute-0 sudo[155832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yebeeyktgessprgvjxbsrpcwmyltqoiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977028.765119-1853-30898703221033/AnsiballZ_command.py'
Nov 24 09:37:09 compute-0 sudo[155832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:09 compute-0 python3.9[155836]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:37:09 compute-0 ovs-vsctl[155838]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 24 09:37:09 compute-0 sudo[155832]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:09 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0002400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:37:09 compute-0 sshd-session[155833]: Invalid user odroid from 209.38.206.249 port 45098
Nov 24 09:37:09 compute-0 sshd-session[155833]: Connection closed by invalid user odroid 209.38.206.249 port 45098 [preauth]
Nov 24 09:37:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:09.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:09 compute-0 sshd-session[142996]: Connection closed by 192.168.122.30 port 33784
Nov 24 09:37:09 compute-0 sshd-session[142993]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:37:09 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 24 09:37:09 compute-0 systemd[1]: session-50.scope: Consumed 59.616s CPU time.
Nov 24 09:37:09 compute-0 systemd-logind[822]: Session 50 logged out. Waiting for processes to exit.
Nov 24 09:37:09 compute-0 systemd-logind[822]: Removed session 50.
Nov 24 09:37:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:10.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:10 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0002400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:10 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:37:10] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Nov 24 09:37:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:37:10] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Nov 24 09:37:11 compute-0 ceph-mon[74331]: pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:37:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:11 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:37:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:11.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:12 compute-0 sshd-session[155865]: Invalid user deploy from 209.38.206.249 port 45114
Nov 24 09:37:12 compute-0 sshd-session[155865]: Connection closed by invalid user deploy 209.38.206.249 port 45114 [preauth]
Nov 24 09:37:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:12.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:12 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:12 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:13 compute-0 ceph-mon[74331]: pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:37:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:13 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:37:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:13.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:14.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:14 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:14 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:15 compute-0 ceph-mon[74331]: pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:37:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:37:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:37:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:15 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:37:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:37:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:37:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:37:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:37:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:15.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:15 compute-0 sshd-session[155871]: Invalid user ts3 from 209.38.206.249 port 45130
Nov 24 09:37:15 compute-0 sshd-session[155873]: Accepted publickey for zuul from 192.168.122.30 port 44032 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:37:15 compute-0 systemd-logind[822]: New session 52 of user zuul.
Nov 24 09:37:15 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 24 09:37:15 compute-0 sshd-session[155873]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:37:15 compute-0 sshd-session[155871]: Connection closed by invalid user ts3 209.38.206.249 port 45130 [preauth]
Nov 24 09:37:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:37:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:16.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:16 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 24 09:37:16 compute-0 systemd[155181]: Activating special unit Exit the Session...
Nov 24 09:37:16 compute-0 systemd[155181]: Stopped target Main User Target.
Nov 24 09:37:16 compute-0 systemd[155181]: Stopped target Basic System.
Nov 24 09:37:16 compute-0 systemd[155181]: Stopped target Paths.
Nov 24 09:37:16 compute-0 systemd[155181]: Stopped target Sockets.
Nov 24 09:37:16 compute-0 systemd[155181]: Stopped target Timers.
Nov 24 09:37:16 compute-0 systemd[155181]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 09:37:16 compute-0 systemd[155181]: Closed D-Bus User Message Bus Socket.
Nov 24 09:37:16 compute-0 systemd[155181]: Stopped Create User's Volatile Files and Directories.
Nov 24 09:37:16 compute-0 systemd[155181]: Removed slice User Application Slice.
Nov 24 09:37:16 compute-0 systemd[155181]: Reached target Shutdown.
Nov 24 09:37:16 compute-0 systemd[155181]: Finished Exit the Session.
Nov 24 09:37:16 compute-0 systemd[155181]: Reached target Exit the Session.
Nov 24 09:37:16 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 24 09:37:16 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 24 09:37:16 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 24 09:37:16 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 24 09:37:16 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 24 09:37:16 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 24 09:37:16 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 24 09:37:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:16 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:16 compute-0 python3.9[156029]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:37:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:16 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:37:16.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:37:17 compute-0 ceph-mon[74331]: pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:37:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:17 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:37:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:17.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:18 compute-0 sudo[156184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrsbcgkvyywilledduijhmizqlyqncym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977037.4366715-62-152993575642377/AnsiballZ_file.py'
Nov 24 09:37:18 compute-0 sudo[156184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:18.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:18 compute-0 python3.9[156186]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:18 compute-0 sudo[156184]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:18 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004560 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:18 compute-0 sudo[156338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqybtifsgwewyhnkilamevmeedgptctt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977038.4592993-62-78541964354692/AnsiballZ_file.py'
Nov 24 09:37:18 compute-0 sudo[156338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:18 compute-0 python3.9[156340]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:18 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c0043f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:18 compute-0 sudo[156338]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:19 compute-0 ceph-mon[74331]: pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:37:19 compute-0 sudo[156491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npjvgqzobuhehsfpaktwvgdqguhamhdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977039.083261-62-107395001017705/AnsiballZ_file.py'
Nov 24 09:37:19 compute-0 sudo[156491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:19 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:19 compute-0 python3.9[156493]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:19 compute-0 sudo[156491]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:19.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:19 compute-0 sudo[156643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmzsbsjopacinrevlvzptxeolobmhalb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977039.6879084-62-138207215212313/AnsiballZ_file.py'
Nov 24 09:37:19 compute-0 sudo[156643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:20 compute-0 python3.9[156645]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:20 compute-0 sudo[156643]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:20.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:20 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:20 compute-0 sudo[156796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrotpojodcgxfrbjhtllszxyrzuzbqpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977040.3307066-62-108832863670043/AnsiballZ_file.py'
Nov 24 09:37:20 compute-0 sudo[156796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:20 compute-0 python3.9[156798]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:20 compute-0 sudo[156796]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:20 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:20 compute-0 sudo[156799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:37:20 compute-0 sudo[156799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:20 compute-0 sudo[156799]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:37:20] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Nov 24 09:37:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:37:20] "GET /metrics HTTP/1.1" 200 48258 "" "Prometheus/2.51.0"
Nov 24 09:37:21 compute-0 ceph-mon[74331]: pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:21 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c0043f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:37:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:21.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:22 compute-0 python3.9[156974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:37:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:37:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:22.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:37:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:22 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c0043f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:22 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:22 compute-0 sudo[157125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxjyqwocbkrkxcndonczzkjxfqoxhmdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977042.507096-194-70029171263302/AnsiballZ_seboolean.py'
Nov 24 09:37:22 compute-0 sudo[157125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:23 compute-0 python3.9[157127]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 09:37:23 compute-0 ceph-mon[74331]: pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:37:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:23 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800045a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:23.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:23 compute-0 sudo[157125]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:24.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:24 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:24 compute-0 python3.9[157280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:24 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:25 compute-0 ceph-mon[74331]: pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:25 compute-0 python3.9[157401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977044.1020052-218-255547803902316/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:25 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c0043f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:37:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:25.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:37:26 compute-0 python3.9[157552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:26.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:26 compute-0 ceph-mon[74331]: pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:26 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800045c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:26 compute-0 python3.9[157674]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977045.5688665-263-43139041195243/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:26 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:37:26.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
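The alertmanager dispatcher keeps failing to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2 port 8443 (context deadline exceeded here, dial i/o timeout at 09:37:36 below), which points at unreachable receivers rather than a malformed payload. A quick reachability probe, as a sketch: the URL is copied from the log, while the empty alert body and 5 s timeout are assumptions.

import json, urllib.request

for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
    url = f"http://{host}:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url,
        data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(host, resp.status)
    except OSError as exc:  # URLError and timeouts both subclass OSError
        print(host, "unreachable:", exc)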
Nov 24 09:37:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:27 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:37:27 compute-0 sudo[157825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogaiomywmwuqhfscymzpcmpnqkzsrnst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977047.0644817-314-232824187009287/AnsiballZ_setup.py'
Nov 24 09:37:27 compute-0 sudo[157825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:27.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:27 compute-0 python3.9[157827]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:37:28 compute-0 sudo[157825]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:28.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:28 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c0043f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:28 compute-0 ceph-mon[74331]: pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:37:28 compute-0 sudo[157910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyjmljwkyajrktkktladitrqvqidirca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977047.0644817-314-232824187009287/AnsiballZ_dnf.py'
Nov 24 09:37:28 compute-0 sudo[157910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:28 compute-0 python3.9[157912]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
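The AnsiballZ_dnf line above is the ansible.legacy.dnf module ensuring the openvswitch package is present; everything after name and state is module defaults echoed back. The same call can be driven from Python via ansible-runner, assuming that package is available; a sketch, not the deployment's actual tooling:

import ansible_runner  # assumed installed: pip install ansible-runner

result = ansible_runner.run(
    private_data_dir="/tmp/runner",          # scratch directory runner requires
    host_pattern="localhost",
    module="ansible.builtin.dnf",
    module_args="name=openvswitch state=present",
)
print(result.status, result.rc)              # "successful", 0 on a clean run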
Nov 24 09:37:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:28 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800045e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:29 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:37:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:29.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:37:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:30.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:30 compute-0 sudo[157910]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:30 compute-0 ceph-mon[74331]: pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:37:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c0043f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:37:30] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Nov 24 09:37:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:37:30] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Nov 24 09:37:31 compute-0 sudo[158067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsqmbiahnqxwkvotajqkinjvkjiftzke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977050.7549222-350-44729159847573/AnsiballZ_systemd.py'
Nov 24 09:37:31 compute-0 sudo[158067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:31 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:37:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:37:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:31.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:37:31 compute-0 python3.9[158069]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:37:31 compute-0 sudo[158067]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:32.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:32 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:32 compute-0 python3.9[158223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:32 compute-0 ceph-mon[74331]: pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:37:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:32 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:33 compute-0 python3.9[158344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977052.1409397-374-4007399096909/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:33 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c0043f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:33.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:33 compute-0 python3.9[158495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:34.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:34 compute-0 python3.9[158616]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977053.2538729-374-135766223422304/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:34 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:34 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:34 compute-0 ceph-mon[74331]: pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:35 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:35.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:35 compute-0 python3.9[158770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:36.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:36 compute-0 ovn_controller[155123]: 2025-11-24T09:37:36Z|00025|memory|INFO|15872 kB peak resident set size after 29.9 seconds
Nov 24 09:37:36 compute-0 ovn_controller[155123]: 2025-11-24T09:37:36Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Nov 24 09:37:36 compute-0 podman[158866]: 2025-11-24 09:37:36.370131135 +0000 UTC m=+0.115145238 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS)
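The podman event above reports health_status=healthy for ovn_controller; per the embedded config_data, the check is the '/openstack/healthcheck' script bind-mounted into the container. The current health state can be read back with podman inspect, as in this sketch (uses the podman CLI; no Python podman binding is assumed):

import json, subprocess

out = subprocess.run(
    ["podman", "inspect", "ovn_controller",
     "--format", "{{json .State.Health}}"],
    capture_output=True, text=True, check=True,
).stdout
health = json.loads(out)
# e.g. "healthy" 0, matching health_status / health_failing_streak above
print(health["Status"], health["FailingStreak"])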
Nov 24 09:37:36 compute-0 python3.9[158902]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977055.516734-506-161541476588035/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:36 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:36 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:36 compute-0 ceph-mon[74331]: pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:37:36.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:37:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:37:36.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:37:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:37:36.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:37:37 compute-0 python3.9[159067]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:37 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:37:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:37:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:37.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:37:37 compute-0 python3.9[159191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977056.637886-506-256808005557211/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:37:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:38.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:37:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:38 compute-0 sshd-session[159068]: Invalid user dspace from 209.38.206.249 port 59630
Nov 24 09:37:38 compute-0 sshd-session[159068]: Connection closed by invalid user dspace 209.38.206.249 port 59630 [preauth]
Nov 24 09:37:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:38 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:38 compute-0 python3.9[159342]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:37:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:38 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:39 compute-0 ceph-mon[74331]: pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:37:39 compute-0 sudo[159495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohrgqlvejeirdpjxozlgvbaqvudmstwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977059.003536-620-68184500983609/AnsiballZ_file.py'
Nov 24 09:37:39 compute-0 sudo[159495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:39 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:39 compute-0 python3.9[159497]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:39 compute-0 sudo[159495]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:39.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:40 compute-0 sudo[159647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awfwnahihqvbrsarlfcclgjxmhohpqpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977059.8138645-644-148021422651266/AnsiballZ_stat.py'
Nov 24 09:37:40 compute-0 sudo[159647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:40.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:40 compute-0 python3.9[159649]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:40 compute-0 sudo[159647]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:40 compute-0 sudo[159726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiqunhfsubgbslocdyoeqrcivclzmvdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977059.8138645-644-148021422651266/AnsiballZ_file.py'
Nov 24 09:37:40 compute-0 sudo[159726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:40 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:40 compute-0 python3.9[159728]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:40 compute-0 sudo[159726]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:40 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c0039a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:37:40] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Nov 24 09:37:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:37:40] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Nov 24 09:37:41 compute-0 ceph-mon[74331]: pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:41 compute-0 sudo[159828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:37:41 compute-0 sudo[159828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:37:41 compute-0 sudo[159828]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:41 compute-0 sudo[159903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzokpjurncihzaijlzimcsrkqzmjmbdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977060.8464763-644-224061709354141/AnsiballZ_stat.py'
Nov 24 09:37:41 compute-0 sudo[159903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:41 compute-0 python3.9[159905]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:41 compute-0 sudo[159903]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:41 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:37:41 compute-0 sudo[159982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbcupicmtwuysifivdvvmkkcjudhsljx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977060.8464763-644-224061709354141/AnsiballZ_file.py'
Nov 24 09:37:41 compute-0 sudo[159982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:41.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:41 compute-0 python3.9[159984]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:41 compute-0 sudo[159982]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:42 compute-0 sshd-session[159985]: Invalid user vpn from 209.38.206.249 port 33932
Nov 24 09:37:42 compute-0 sshd-session[159985]: Connection closed by invalid user vpn 209.38.206.249 port 33932 [preauth]
Nov 24 09:37:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:42.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:42 compute-0 sudo[160137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtuslumzyndzfkjsnoujnrofhppdigsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977062.2517126-713-120668864965052/AnsiballZ_file.py'
Nov 24 09:37:42 compute-0 sudo[160137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:42 compute-0 python3.9[160139]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
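Note the mode=420 in the file call above: this is almost certainly an unquoted 0644 in the playbook, which YAML parses as an octal literal and hands to the module as the integer 420. The outcome is still the intended permissions, since decimal 420 is exactly octal 644:

# mode=420 is the YAML-parsed form of an unquoted 0644: YAML reads the
# leading zero as octal, and decimal 420 is the same bit pattern.
assert 420 == 0o644
print(oct(420), format(420, '04o'))   # 0o644 0644  -> rw-r--r--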
Nov 24 09:37:42 compute-0 sudo[160137]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:43 compute-0 ceph-mon[74331]: pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:37:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:43 compute-0 sudo[160290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noblntkfufxlxbccguclsekpldmfoyrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977063.0334938-737-474937828461/AnsiballZ_stat.py'
Nov 24 09:37:43 compute-0 sudo[160290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:43 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c0039a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:43 compute-0 python3.9[160292]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:43 compute-0 sudo[160290]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:43.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:43 compute-0 sudo[160368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sibbkpkdnrlbhbgkgpfrmdouywsbsoqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977063.0334938-737-474937828461/AnsiballZ_file.py'
Nov 24 09:37:43 compute-0 sudo[160368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:43 compute-0 python3.9[160370]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:43 compute-0 sudo[160368]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:44.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:44 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:44 compute-0 sudo[160521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytfrxaqebnwyrgssxscbwqwjjystphns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977064.2687614-773-186207409914540/AnsiballZ_stat.py'
Nov 24 09:37:44 compute-0 sudo[160521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:44 compute-0 python3.9[160523]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:44 compute-0 sudo[160521]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:44 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:45 compute-0 sudo[160599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anufwduimhglldbbtcqkljjyvscjcntv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977064.2687614-773-186207409914540/AnsiballZ_file.py'
Nov 24 09:37:45 compute-0 sudo[160599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:45 compute-0 ceph-mon[74331]: pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:45 compute-0 python3.9[160601]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:37:45
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'backups', '.mgr', '.rgw.root', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', '.nfs']
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:37:45 compute-0 sudo[160599]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:37:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:45 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
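The pg_autoscaler arithmetic above is recoverable from the logged numbers: each "pg target" equals the pool's usage ratio times its bias times 300, after which the target is quantized to a power of two subject to per-pool floors (which is why 0.0006 quantizes to 16 for cephfs.cephfs.meta while 0.002 quantizes to 1 for .mgr). The factor 300 is consistent with the default mon_target_pg_per_osd=100 across 3 OSDs, but that split is an inference from the logged ratios, not read from cluster config. A check against two of the logged pools:

def pg_target(usage_ratio: float, bias: float,
              pgs_per_osd: int = 100, osds: int = 3) -> float:
    # usage_ratio and bias are copied from the pg_autoscaler lines above;
    # pgs_per_osd * osds = 300 is inferred, not taken from the cluster.
    return usage_ratio * bias * pgs_per_osd * osds

print(pg_target(7.185749983720779e-06, 1.0))  # 0.00215572... ('.mgr')
print(pg_target(5.087256625643029e-07, 4.0))  # 0.00061047... ('cephfs.cephfs.meta')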
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:37:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:37:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:37:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:45.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:37:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:37:46 compute-0 sudo[160753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxrnblnclymllgdkrtefuhowesxgdjqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977065.9377894-809-258230081678522/AnsiballZ_systemd.py'
Nov 24 09:37:46 compute-0 sudo[160753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:46.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:46 compute-0 python3.9[160755]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:37:46 compute-0 systemd[1]: Reloading.
Nov 24 09:37:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:46 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:46 compute-0 systemd-sysv-generator[160786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:37:46 compute-0 systemd-rc-local-generator[160783]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:37:46 compute-0 sudo[160753]: pam_unix(sudo:session): session closed for user root
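The ansible.builtin.systemd call at 09:37:46 bundles three operations: daemon_reload=True (which is what makes PID 1 log "Reloading." and re-run the sysv and rc-local generators above), enabled=True, and state=started. The equivalent sequence by hand, as a sketch with the unit name taken from the log line:

import subprocess

for cmd in (["systemctl", "daemon-reload"],
            ["systemctl", "enable", "edpm-container-shutdown"],
            ["systemctl", "start", "edpm-container-shutdown"]):
    subprocess.run(cmd, check=True)   # raises CalledProcessError on failure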
Nov 24 09:37:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:46 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:37:46.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:37:47 compute-0 ceph-mon[74331]: pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:47 compute-0 sudo[160944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqreftgnauoaelwzsbwbmdeggyzlvfii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977067.0986571-833-55366536586192/AnsiballZ_stat.py'
Nov 24 09:37:47 compute-0 sudo[160944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:47 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:37:47 compute-0 python3.9[160946]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:47 compute-0 sudo[160944]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:37:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:47.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:37:47 compute-0 sudo[161022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jemglzfqwyzrexglqrsuvcdttccsrlli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977067.0986571-833-55366536586192/AnsiballZ_file.py'
Nov 24 09:37:47 compute-0 sudo[161022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:48 compute-0 python3.9[161024]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:48 compute-0 sudo[161022]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:37:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:48.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:37:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:48 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:48 compute-0 sudo[161175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcyjtaybvsxbowxnvjugzsauhosypnsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977068.4301593-869-131502336877924/AnsiballZ_stat.py'
Nov 24 09:37:48 compute-0 sudo[161175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:48 compute-0 python3.9[161177]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:48 compute-0 sudo[161175]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:48 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:49 compute-0 ceph-mon[74331]: pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:37:49 compute-0 sudo[161253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkoemkkmslyaynomgvfdhztyqtectole ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977068.4301593-869-131502336877924/AnsiballZ_file.py'
Nov 24 09:37:49 compute-0 sudo[161253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:49 compute-0 python3.9[161255]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:49 compute-0 sudo[161253]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:49 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:37:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:49.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:37:49 compute-0 sudo[161406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awinkhsfmhtvcohbqwevfoqwptmitdzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977069.6605866-905-126460858959133/AnsiballZ_systemd.py'
Nov 24 09:37:49 compute-0 sudo[161406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:50.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:50 compute-0 python3.9[161408]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:37:50 compute-0 systemd[1]: Reloading.
Nov 24 09:37:50 compute-0 systemd-rc-local-generator[161436]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:37:50 compute-0 systemd-sysv-generator[161440]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:37:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:50 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:50 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 09:37:50 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 09:37:50 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 09:37:50 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 09:37:50 compute-0 sudo[161406]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:50 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:37:50] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:37:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:37:50] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Nov 24 09:37:51 compute-0 ceph-mon[74331]: pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:37:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:51 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:51 compute-0 sudo[161601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pekzkvfbdaxuvdgljthxcbvhmhdpzfnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977071.1973066-935-129506072245368/AnsiballZ_file.py'
Nov 24 09:37:51 compute-0 sudo[161601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 09:37:51 compute-0 radosgw[89481]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 24 09:37:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:51.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:51 compute-0 python3.9[161603]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:51 compute-0 sudo[161601]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:52 compute-0 sudo[161754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjthuuvwuzppfulnuqbningujswygrfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977071.9639006-959-229464897424862/AnsiballZ_stat.py'
Nov 24 09:37:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:52.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:52 compute-0 sudo[161754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:52 compute-0 python3.9[161756]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:52 compute-0 sudo[161754]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:52 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:52 compute-0 sudo[161877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymyqbdeduygqsyziysrcvwuhhbneelvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977071.9639006-959-229464897424862/AnsiballZ_copy.py'
Nov 24 09:37:52 compute-0 sudo[161877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:52 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:53 compute-0 python3.9[161879]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977071.9639006-959-229464897424862/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:53 compute-0 sudo[161877]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:53 compute-0 ceph-mon[74331]: pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 09:37:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:53 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 09:37:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:53.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:53 compute-0 sudo[162030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqcvqaambkkwqsuihxndlnicogynwzmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977073.6798382-1010-74281613359259/AnsiballZ_file.py'
Nov 24 09:37:53 compute-0 sudo[162030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:54 compute-0 python3.9[162032]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:37:54 compute-0 sudo[162030]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:37:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:54.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:37:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:54 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:54 compute-0 sudo[162183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtmnwbtqlrzmsxqtboasupyrcsbugcxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977074.4741848-1034-264518515850750/AnsiballZ_stat.py'
Nov 24 09:37:54 compute-0 sudo[162183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:54 compute-0 python3.9[162185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:37:54 compute-0 sudo[162183]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:54 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:55 compute-0 ceph-mon[74331]: pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 09:37:55 compute-0 sudo[162307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebebuiywiasjheevhqaqtjwkcmjpftmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977074.4741848-1034-264518515850750/AnsiballZ_copy.py'
Nov 24 09:37:55 compute-0 sudo[162307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:55 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 09:37:55 compute-0 python3.9[162309]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977074.4741848-1034-264518515850750/.source.json _original_basename=.zz4lvlxo follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:55 compute-0 sudo[162307]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:55.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:56.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:56 compute-0 ceph-mon[74331]: pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Nov 24 09:37:56 compute-0 sudo[162460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awbybvnipqudqvveyslhuuvdtvbhoyvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977075.843753-1079-151084861680734/AnsiballZ_file.py'
Nov 24 09:37:56 compute-0 sudo[162460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:56 compute-0 python3.9[162462]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:37:56 compute-0 sudo[162460]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:56 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:56 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:37:56.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:37:57 compute-0 sudo[162612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crbojmjeujxxqsaxixtewbitbftidefw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977076.7582853-1103-72136713185716/AnsiballZ_stat.py'
Nov 24 09:37:57 compute-0 sudo[162612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:57 compute-0 sudo[162612]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:57 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 0 B/s wr, 140 op/s
Nov 24 09:37:57 compute-0 sudo[162736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsekhyeyblavsdvxsrpmpswypanctpkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977076.7582853-1103-72136713185716/AnsiballZ_copy.py'
Nov 24 09:37:57 compute-0 sudo[162736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:57.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:37:57 compute-0 sudo[162736]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:37:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:37:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:37:58.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:37:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:58 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:58 compute-0 ceph-mon[74331]: pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 0 B/s wr, 140 op/s
Nov 24 09:37:58 compute-0 sudo[162889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbisobhhbmwyvrlscsejlpxfmjpvrdva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977078.2662303-1154-81016990747783/AnsiballZ_container_config_data.py'
Nov 24 09:37:58 compute-0 sudo[162889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:37:58 compute-0 python3.9[162891]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 24 09:37:58 compute-0 sudo[162889]: pam_unix(sudo:session): session closed for user root
Nov 24 09:37:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:58 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003be0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:37:59 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:37:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 0 B/s wr, 140 op/s
Nov 24 09:37:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:37:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:37:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:37:59.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:00 compute-0 sudo[163042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxljgopspioghscolhdnjtvgjnoeryvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977079.6008964-1181-48467472285417/AnsiballZ_container_config_hash.py'
Nov 24 09:38:00 compute-0 sudo[163042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:00 compute-0 python3.9[163044]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 09:38:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:00.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:00 compute-0 sudo[163042]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:00 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:00 compute-0 ceph-mon[74331]: pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 0 B/s wr, 140 op/s
Nov 24 09:38:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:38:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:00 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:38:00] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:38:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:38:00] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:38:01 compute-0 sudo[163195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivvltvgkiwfsxpdgjqwguvenkudzfowq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977080.6201882-1208-120394144322307/AnsiballZ_podman_container_info.py'
Nov 24 09:38:01 compute-0 sudo[163195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:01 compute-0 sudo[163198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:38:01 compute-0 sudo[163198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:01 compute-0 sudo[163198]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:01 compute-0 python3.9[163197]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 09:38:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:01 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003c00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:01 compute-0 sudo[163195]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 0 B/s wr, 140 op/s
Nov 24 09:38:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:38:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:01.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:38:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:02.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:02 compute-0 sshd-session[163274]: Invalid user vps from 209.38.206.249 port 37266
Nov 24 09:38:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:02 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:02 compute-0 sshd-session[163274]: Connection closed by invalid user vps 209.38.206.249 port 37266 [preauth]
Nov 24 09:38:02 compute-0 ceph-mon[74331]: pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 0 B/s wr, 140 op/s
Nov 24 09:38:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:02 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:03 compute-0 sudo[163402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vywdpwyjgceglejdgswthkgrdywrmohx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763977082.4001231-1247-105307075688775/AnsiballZ_edpm_container_manage.py'
Nov 24 09:38:03 compute-0 sudo[163402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:03 compute-0 python3[163404]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 09:38:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093803 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:38:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:03 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Nov 24 09:38:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:38:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:03.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:38:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:04.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:04 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:04 compute-0 ceph-mon[74331]: pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Nov 24 09:38:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:04 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:05 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Nov 24 09:38:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:05.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:38:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:06.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:38:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:06 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:06 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003c40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:38:06.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:38:07 compute-0 ceph-mon[74331]: pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Nov 24 09:38:07 compute-0 podman[163474]: 2025-11-24 09:38:07.147999629 +0000 UTC m=+0.410848035 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:38:07 compute-0 sudo[163517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:38:07 compute-0 sudo[163517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:07 compute-0 sudo[163517]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:07 compute-0 sudo[163542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:38:07 compute-0 sudo[163542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:07 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Nov 24 09:38:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:07.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:38:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:08.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:38:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:08 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:08 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:09 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003c40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:38:09 compute-0 ceph-mon[74331]: pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Nov 24 09:38:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:38:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:09.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:38:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:10.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:10 compute-0 sshd-session[163581]: Invalid user oracle from 209.38.206.249 port 37282
Nov 24 09:38:10 compute-0 sshd-session[163581]: Connection closed by invalid user oracle 209.38.206.249 port 37282 [preauth]
Nov 24 09:38:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:10 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:10 compute-0 ceph-mon[74331]: pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:38:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:10 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:38:10] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:38:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:38:10] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:38:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:11 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:38:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.002000044s ======
Nov 24 09:38:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:11.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000044s
Nov 24 09:38:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:12.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:12 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:12 compute-0 podman[163419]: 2025-11-24 09:38:12.676572903 +0000 UTC m=+9.253920996 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:38:12 compute-0 sudo[163542]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:38:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:38:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:38:12 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:38:12 compute-0 podman[163663]: 2025-11-24 09:38:12.85672656 +0000 UTC m=+0.060700021 container create c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 09:38:12 compute-0 podman[163663]: 2025-11-24 09:38:12.823166266 +0000 UTC m=+0.027139777 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:38:12 compute-0 python3[163404]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:38:12 compute-0 sudo[163676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:38:12 compute-0 sudo[163676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:12 compute-0 sudo[163676]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:12 compute-0 sudo[163709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:38:12 compute-0 sudo[163709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:12 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:13 compute-0 sudo[163402]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:13 compute-0 ceph-mon[74331]: pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:38:13 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:38:13 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:38:13 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:38:13 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:38:13 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:38:13 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:38:13 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:38:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:13 compute-0 podman[163886]: 2025-11-24 09:38:13.372699121 +0000 UTC m=+0.043710281 container create c0b9e4820e9bfd09617485ea8bd589a5ea0353ff1bc385600aaf4ec77a88ddb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bose, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:38:13 compute-0 systemd[1]: Started libpod-conmon-c0b9e4820e9bfd09617485ea8bd589a5ea0353ff1bc385600aaf4ec77a88ddb6.scope.
Nov 24 09:38:13 compute-0 podman[163886]: 2025-11-24 09:38:13.35188497 +0000 UTC m=+0.022896160 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:38:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:38:13 compute-0 podman[163886]: 2025-11-24 09:38:13.465751918 +0000 UTC m=+0.136763118 container init c0b9e4820e9bfd09617485ea8bd589a5ea0353ff1bc385600aaf4ec77a88ddb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bose, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 09:38:13 compute-0 podman[163886]: 2025-11-24 09:38:13.474359587 +0000 UTC m=+0.145370757 container start c0b9e4820e9bfd09617485ea8bd589a5ea0353ff1bc385600aaf4ec77a88ddb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bose, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:38:13 compute-0 podman[163886]: 2025-11-24 09:38:13.479207369 +0000 UTC m=+0.150218539 container attach c0b9e4820e9bfd09617485ea8bd589a5ea0353ff1bc385600aaf4ec77a88ddb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:38:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:13 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:13 compute-0 systemd[1]: libpod-c0b9e4820e9bfd09617485ea8bd589a5ea0353ff1bc385600aaf4ec77a88ddb6.scope: Deactivated successfully.
Nov 24 09:38:13 compute-0 awesome_bose[163931]: 167 167
Nov 24 09:38:13 compute-0 conmon[163931]: conmon c0b9e4820e9bfd096174 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0b9e4820e9bfd09617485ea8bd589a5ea0353ff1bc385600aaf4ec77a88ddb6.scope/container/memory.events
Nov 24 09:38:13 compute-0 podman[163886]: 2025-11-24 09:38:13.488305109 +0000 UTC m=+0.159316289 container died c0b9e4820e9bfd09617485ea8bd589a5ea0353ff1bc385600aaf4ec77a88ddb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bose, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:38:13 compute-0 sudo[163961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkfmdlkzwkygkjyjxlrwqacdsidznfcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977093.2083263-1271-275120875292216/AnsiballZ_stat.py'
Nov 24 09:38:13 compute-0 sudo[163961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4b6549172095900694c2081b292db25c7178101f7e606a5bc43a31f60e529e9-merged.mount: Deactivated successfully.
Nov 24 09:38:13 compute-0 podman[163886]: 2025-11-24 09:38:13.543631236 +0000 UTC m=+0.214642406 container remove c0b9e4820e9bfd09617485ea8bd589a5ea0353ff1bc385600aaf4ec77a88ddb6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:38:13 compute-0 systemd[1]: libpod-conmon-c0b9e4820e9bfd09617485ea8bd589a5ea0353ff1bc385600aaf4ec77a88ddb6.scope: Deactivated successfully.
Nov 24 09:38:13 compute-0 python3.9[163965]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:38:13 compute-0 podman[163983]: 2025-11-24 09:38:13.7036646 +0000 UTC m=+0.041272423 container create 595fe128646bd1aa9cc418efc4127a93cbf68d1ed9567d8baba2d02a6bbaf533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:38:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:13.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:13 compute-0 sudo[163961]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:13 compute-0 systemd[1]: Started libpod-conmon-595fe128646bd1aa9cc418efc4127a93cbf68d1ed9567d8baba2d02a6bbaf533.scope.
Nov 24 09:38:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b6126ce9fbe8f003450084fc541d88c236df5f667afca71765edcbe2ed6129/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:13 compute-0 podman[163983]: 2025-11-24 09:38:13.685546832 +0000 UTC m=+0.023154685 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b6126ce9fbe8f003450084fc541d88c236df5f667afca71765edcbe2ed6129/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b6126ce9fbe8f003450084fc541d88c236df5f667afca71765edcbe2ed6129/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b6126ce9fbe8f003450084fc541d88c236df5f667afca71765edcbe2ed6129/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b6126ce9fbe8f003450084fc541d88c236df5f667afca71765edcbe2ed6129/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:13 compute-0 podman[163983]: 2025-11-24 09:38:13.798279915 +0000 UTC m=+0.135887758 container init 595fe128646bd1aa9cc418efc4127a93cbf68d1ed9567d8baba2d02a6bbaf533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:38:13 compute-0 podman[163983]: 2025-11-24 09:38:13.805018789 +0000 UTC m=+0.142626612 container start 595fe128646bd1aa9cc418efc4127a93cbf68d1ed9567d8baba2d02a6bbaf533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:38:13 compute-0 podman[163983]: 2025-11-24 09:38:13.811643252 +0000 UTC m=+0.149251085 container attach 595fe128646bd1aa9cc418efc4127a93cbf68d1ed9567d8baba2d02a6bbaf533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_panini, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:38:14 compute-0 boring_panini[164002]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:38:14 compute-0 boring_panini[164002]: --> All data devices are unavailable
Nov 24 09:38:14 compute-0 systemd[1]: libpod-595fe128646bd1aa9cc418efc4127a93cbf68d1ed9567d8baba2d02a6bbaf533.scope: Deactivated successfully.
Nov 24 09:38:14 compute-0 podman[163983]: 2025-11-24 09:38:14.215467694 +0000 UTC m=+0.553075537 container died 595fe128646bd1aa9cc418efc4127a93cbf68d1ed9567d8baba2d02a6bbaf533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-79b6126ce9fbe8f003450084fc541d88c236df5f667afca71765edcbe2ed6129-merged.mount: Deactivated successfully.
Nov 24 09:38:14 compute-0 ceph-mon[74331]: pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:38:14 compute-0 podman[163983]: 2025-11-24 09:38:14.265179511 +0000 UTC m=+0.602787334 container remove 595fe128646bd1aa9cc418efc4127a93cbf68d1ed9567d8baba2d02a6bbaf533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_panini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:38:14 compute-0 systemd[1]: libpod-conmon-595fe128646bd1aa9cc418efc4127a93cbf68d1ed9567d8baba2d02a6bbaf533.scope: Deactivated successfully.
Nov 24 09:38:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:14.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:14 compute-0 sudo[163709]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:14 compute-0 sudo[164129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:38:14 compute-0 sudo[164129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:14 compute-0 sudo[164129]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:14 compute-0 sudo[164178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:38:14 compute-0 sudo[164178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:14 compute-0 sudo[164227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwkhsnwyomgovkqozhkhjhboyjbvdrkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977094.1536992-1298-187541991569588/AnsiballZ_file.py'
Nov 24 09:38:14 compute-0 sudo[164227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:14 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:14 compute-0 python3.9[164231]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:14 compute-0 sudo[164227]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:14 compute-0 podman[164304]: 2025-11-24 09:38:14.818307219 +0000 UTC m=+0.042147824 container create 859768bb269dd47fa91b50d2583808cc32ef67b6ec8d4be998fd2c6ecb1029fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_yonath, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:38:14 compute-0 systemd[1]: Started libpod-conmon-859768bb269dd47fa91b50d2583808cc32ef67b6ec8d4be998fd2c6ecb1029fb.scope.
Nov 24 09:38:14 compute-0 sudo[164363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdrrddlrejkyvxgyfotqzppocbgwweqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977094.1536992-1298-187541991569588/AnsiballZ_stat.py'
Nov 24 09:38:14 compute-0 sudo[164363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:38:14 compute-0 podman[164304]: 2025-11-24 09:38:14.80058218 +0000 UTC m=+0.024422795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:38:14 compute-0 podman[164304]: 2025-11-24 09:38:14.907812005 +0000 UTC m=+0.131652650 container init 859768bb269dd47fa91b50d2583808cc32ef67b6ec8d4be998fd2c6ecb1029fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 09:38:14 compute-0 podman[164304]: 2025-11-24 09:38:14.916048935 +0000 UTC m=+0.139889550 container start 859768bb269dd47fa91b50d2583808cc32ef67b6ec8d4be998fd2c6ecb1029fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_yonath, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:38:14 compute-0 determined_yonath[164365]: 167 167
Nov 24 09:38:14 compute-0 systemd[1]: libpod-859768bb269dd47fa91b50d2583808cc32ef67b6ec8d4be998fd2c6ecb1029fb.scope: Deactivated successfully.
Nov 24 09:38:14 compute-0 podman[164304]: 2025-11-24 09:38:14.922157816 +0000 UTC m=+0.145998451 container attach 859768bb269dd47fa91b50d2583808cc32ef67b6ec8d4be998fd2c6ecb1029fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_yonath, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 09:38:14 compute-0 podman[164304]: 2025-11-24 09:38:14.922536015 +0000 UTC m=+0.146376630 container died 859768bb269dd47fa91b50d2583808cc32ef67b6ec8d4be998fd2c6ecb1029fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-370d21a3bfc847579a27e217054cb5691ad62b828e55e1d05b57c74ef08b4d53-merged.mount: Deactivated successfully.
Nov 24 09:38:14 compute-0 podman[164304]: 2025-11-24 09:38:14.957945652 +0000 UTC m=+0.181786267 container remove 859768bb269dd47fa91b50d2583808cc32ef67b6ec8d4be998fd2c6ecb1029fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:38:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:14 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003c80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:14 compute-0 systemd[1]: libpod-conmon-859768bb269dd47fa91b50d2583808cc32ef67b6ec8d4be998fd2c6ecb1029fb.scope: Deactivated successfully.
Nov 24 09:38:15 compute-0 python3.9[164367]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:38:15 compute-0 sudo[164363]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:15 compute-0 podman[164391]: 2025-11-24 09:38:15.142556414 +0000 UTC m=+0.042381880 container create aef246a8c1d24fa02a153c378176bc1c5b7b6fd73fc16a3f8b86d951cc1f6186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_diffie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Nov 24 09:38:15 compute-0 systemd[1]: Started libpod-conmon-aef246a8c1d24fa02a153c378176bc1c5b7b6fd73fc16a3f8b86d951cc1f6186.scope.
Nov 24 09:38:15 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c7a6384fc3811476dabe4aaa5e8334ac90a8c6507aca4eacce7e56d8cdd74bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c7a6384fc3811476dabe4aaa5e8334ac90a8c6507aca4eacce7e56d8cdd74bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c7a6384fc3811476dabe4aaa5e8334ac90a8c6507aca4eacce7e56d8cdd74bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c7a6384fc3811476dabe4aaa5e8334ac90a8c6507aca4eacce7e56d8cdd74bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:15 compute-0 podman[164391]: 2025-11-24 09:38:15.123142665 +0000 UTC m=+0.022968161 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:38:15 compute-0 podman[164391]: 2025-11-24 09:38:15.226477621 +0000 UTC m=+0.126303117 container init aef246a8c1d24fa02a153c378176bc1c5b7b6fd73fc16a3f8b86d951cc1f6186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:38:15 compute-0 podman[164391]: 2025-11-24 09:38:15.233177345 +0000 UTC m=+0.133002811 container start aef246a8c1d24fa02a153c378176bc1c5b7b6fd73fc16a3f8b86d951cc1f6186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 24 09:38:15 compute-0 podman[164391]: 2025-11-24 09:38:15.236253477 +0000 UTC m=+0.136078973 container attach aef246a8c1d24fa02a153c378176bc1c5b7b6fd73fc16a3f8b86d951cc1f6186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 24 09:38:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:38:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:38:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:38:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:38:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:38:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:38:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:38:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:15 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:38:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:15 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:38:15 compute-0 elastic_diffie[164448]: {
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:     "0": [
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:         {
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             "devices": [
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "/dev/loop3"
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             ],
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             "lv_name": "ceph_lv0",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             "lv_size": "21470642176",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             "name": "ceph_lv0",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             "tags": {
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.cluster_name": "ceph",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.crush_device_class": "",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.encrypted": "0",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.osd_id": "0",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.type": "block",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.vdo": "0",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:                 "ceph.with_tpm": "0"
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             },
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             "type": "block",
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:             "vg_name": "ceph_vg0"
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:         }
Nov 24 09:38:15 compute-0 elastic_diffie[164448]:     ]
Nov 24 09:38:15 compute-0 elastic_diffie[164448]: }
Nov 24 09:38:15 compute-0 systemd[1]: libpod-aef246a8c1d24fa02a153c378176bc1c5b7b6fd73fc16a3f8b86d951cc1f6186.scope: Deactivated successfully.
Nov 24 09:38:15 compute-0 podman[164391]: 2025-11-24 09:38:15.564446152 +0000 UTC m=+0.464271628 container died aef246a8c1d24fa02a153c378176bc1c5b7b6fd73fc16a3f8b86d951cc1f6186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:38:15 compute-0 sudo[164566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkluczugfslltgngkisurrrmczmleisj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977095.147464-1298-28718576620266/AnsiballZ_copy.py'
Nov 24 09:38:15 compute-0 sudo[164566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c7a6384fc3811476dabe4aaa5e8334ac90a8c6507aca4eacce7e56d8cdd74bd-merged.mount: Deactivated successfully.
Nov 24 09:38:15 compute-0 podman[164391]: 2025-11-24 09:38:15.610487875 +0000 UTC m=+0.510313351 container remove aef246a8c1d24fa02a153c378176bc1c5b7b6fd73fc16a3f8b86d951cc1f6186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 24 09:38:15 compute-0 systemd[1]: libpod-conmon-aef246a8c1d24fa02a153c378176bc1c5b7b6fd73fc16a3f8b86d951cc1f6186.scope: Deactivated successfully.
Nov 24 09:38:15 compute-0 sudo[164178]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:15.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:15 compute-0 sudo[164580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:38:15 compute-0 sudo[164580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:15 compute-0 sudo[164580]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:15 compute-0 sudo[164605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:38:15 compute-0 sudo[164605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:15 compute-0 python3.9[164579]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763977095.147464-1298-28718576620266/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:15 compute-0 sudo[164566]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:16 compute-0 podman[164693]: 2025-11-24 09:38:16.192919159 +0000 UTC m=+0.038140742 container create b1bacd29e159a1e437fff9d79b3fee8d801ad64540c30180ffb4fc82ffe78ed0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_ardinghelli, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:38:16 compute-0 systemd[1]: Started libpod-conmon-b1bacd29e159a1e437fff9d79b3fee8d801ad64540c30180ffb4fc82ffe78ed0.scope.
Nov 24 09:38:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:38:16 compute-0 podman[164693]: 2025-11-24 09:38:16.176889428 +0000 UTC m=+0.022111031 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:38:16 compute-0 podman[164693]: 2025-11-24 09:38:16.282803853 +0000 UTC m=+0.128025456 container init b1bacd29e159a1e437fff9d79b3fee8d801ad64540c30180ffb4fc82ffe78ed0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_ardinghelli, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:38:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:16.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:16 compute-0 podman[164693]: 2025-11-24 09:38:16.290515102 +0000 UTC m=+0.135736685 container start b1bacd29e159a1e437fff9d79b3fee8d801ad64540c30180ffb4fc82ffe78ed0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 09:38:16 compute-0 podman[164693]: 2025-11-24 09:38:16.293786587 +0000 UTC m=+0.139008180 container attach b1bacd29e159a1e437fff9d79b3fee8d801ad64540c30180ffb4fc82ffe78ed0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_ardinghelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:38:16 compute-0 ecstatic_ardinghelli[164711]: 167 167
Nov 24 09:38:16 compute-0 systemd[1]: libpod-b1bacd29e159a1e437fff9d79b3fee8d801ad64540c30180ffb4fc82ffe78ed0.scope: Deactivated successfully.
Nov 24 09:38:16 compute-0 podman[164693]: 2025-11-24 09:38:16.296593972 +0000 UTC m=+0.141815555 container died b1bacd29e159a1e437fff9d79b3fee8d801ad64540c30180ffb4fc82ffe78ed0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_ardinghelli, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a7089d690264835003bc42d71a85cba4a9ce41697a7bb6139686d03baeccd2b-merged.mount: Deactivated successfully.
Nov 24 09:38:16 compute-0 podman[164693]: 2025-11-24 09:38:16.340806183 +0000 UTC m=+0.186027766 container remove b1bacd29e159a1e437fff9d79b3fee8d801ad64540c30180ffb4fc82ffe78ed0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 09:38:16 compute-0 systemd[1]: libpod-conmon-b1bacd29e159a1e437fff9d79b3fee8d801ad64540c30180ffb4fc82ffe78ed0.scope: Deactivated successfully.
Nov 24 09:38:16 compute-0 sudo[164781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjekckmkdlvrtnpkajfpqnjazdbovqtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977095.147464-1298-28718576620266/AnsiballZ_systemd.py'
Nov 24 09:38:16 compute-0 sudo[164781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:16 compute-0 ceph-mon[74331]: pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:38:16 compute-0 podman[164789]: 2025-11-24 09:38:16.499150378 +0000 UTC m=+0.035584073 container create afd2ace8f3e67bcc04b12ac27fff0ed59a8a8e8199b01c75c5552025b8808f80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_nobel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:38:16 compute-0 systemd[1]: Started libpod-conmon-afd2ace8f3e67bcc04b12ac27fff0ed59a8a8e8199b01c75c5552025b8808f80.scope.
Nov 24 09:38:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:38:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:16 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:16 compute-0 podman[164789]: 2025-11-24 09:38:16.484179732 +0000 UTC m=+0.020613457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec540a74509b1eb37a8c0d9bbd182dc9245daaddd22828600efad2fee05374d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec540a74509b1eb37a8c0d9bbd182dc9245daaddd22828600efad2fee05374d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec540a74509b1eb37a8c0d9bbd182dc9245daaddd22828600efad2fee05374d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec540a74509b1eb37a8c0d9bbd182dc9245daaddd22828600efad2fee05374d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:16 compute-0 podman[164789]: 2025-11-24 09:38:16.604133991 +0000 UTC m=+0.140567696 container init afd2ace8f3e67bcc04b12ac27fff0ed59a8a8e8199b01c75c5552025b8808f80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 09:38:16 compute-0 podman[164789]: 2025-11-24 09:38:16.61407473 +0000 UTC m=+0.150508425 container start afd2ace8f3e67bcc04b12ac27fff0ed59a8a8e8199b01c75c5552025b8808f80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_nobel, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:38:16 compute-0 podman[164789]: 2025-11-24 09:38:16.618484852 +0000 UTC m=+0.154918567 container attach afd2ace8f3e67bcc04b12ac27fff0ed59a8a8e8199b01c75c5552025b8808f80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_nobel, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:38:16 compute-0 python3.9[164783]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:38:16 compute-0 systemd[1]: Reloading.
Nov 24 09:38:16 compute-0 systemd-rc-local-generator[164836]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:38:16 compute-0 systemd-sysv-generator[164840]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:38:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:16 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:38:16.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:38:17 compute-0 sudo[164781]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:17 compute-0 sudo[164988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygnhtwzfjtazreywqtnpkaahtdnszhiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977095.147464-1298-28718576620266/AnsiballZ_systemd.py'
Nov 24 09:38:17 compute-0 sudo[164988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:17 compute-0 lvm[164990]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:38:17 compute-0 lvm[164990]: VG ceph_vg0 finished
Nov 24 09:38:17 compute-0 eager_nobel[164806]: {}
Nov 24 09:38:17 compute-0 systemd[1]: libpod-afd2ace8f3e67bcc04b12ac27fff0ed59a8a8e8199b01c75c5552025b8808f80.scope: Deactivated successfully.
Nov 24 09:38:17 compute-0 systemd[1]: libpod-afd2ace8f3e67bcc04b12ac27fff0ed59a8a8e8199b01c75c5552025b8808f80.scope: Consumed 1.300s CPU time.
Nov 24 09:38:17 compute-0 podman[164789]: 2025-11-24 09:38:17.453349202 +0000 UTC m=+0.989782897 container died afd2ace8f3e67bcc04b12ac27fff0ed59a8a8e8199b01c75c5552025b8808f80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_nobel, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 09:38:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:17 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ec540a74509b1eb37a8c0d9bbd182dc9245daaddd22828600efad2fee05374d-merged.mount: Deactivated successfully.
Nov 24 09:38:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:38:17 compute-0 podman[164789]: 2025-11-24 09:38:17.534932425 +0000 UTC m=+1.071366120 container remove afd2ace8f3e67bcc04b12ac27fff0ed59a8a8e8199b01c75c5552025b8808f80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:38:17 compute-0 systemd[1]: libpod-conmon-afd2ace8f3e67bcc04b12ac27fff0ed59a8a8e8199b01c75c5552025b8808f80.scope: Deactivated successfully.
Nov 24 09:38:17 compute-0 sudo[164605]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:17 compute-0 python3.9[164992]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
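The ansible-systemd line above is the module-side echo of the task parameters it was invoked with (state=restarted, enabled=True, scope=system). On the host this boils down to two systemctl calls; a sketch of the equivalent effect, not the module's actual code:

    # Equivalent effect of the ansible-systemd invocation above (sketch,
    # using plain systemctl rather than Ansible's module code).
    import subprocess

    unit = "edpm_ovn_metadata_agent.service"
    subprocess.run(["systemctl", "enable", unit], check=True)   # enabled=True
    subprocess.run(["systemctl", "restart", unit], check=True)  # state=restarted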
Nov 24 09:38:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:17.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
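The anonymous "HEAD / HTTP/1.0" requests returning 200 arrive from 192.168.122.100 and 192.168.122.102 on a roughly two-second cadence, the signature of load-balancer liveness probes against the radosgw beast frontend rather than real client traffic. A probe of the same shape, with host and port as illustrative placeholders:

    # Liveness probe of the same shape as the anonymous HEAD requests that
    # radosgw logs above. Host and port are placeholders, not from the log.
    import http.client

    def rgw_alive(host: str, port: int, timeout: float = 2.0) -> bool:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()

    print(rgw_alive("compute-0", 8080))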
Nov 24 09:38:17 compute-0 systemd[1]: Reloading.
Nov 24 09:38:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:38:17 compute-0 systemd-sysv-generator[165041]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:38:17 compute-0 systemd-rc-local-generator[165036]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:38:18 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 24 09:38:18 compute-0 sshd-session[165011]: Invalid user vagrant from 209.38.206.249 port 37264
Nov 24 09:38:18 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:38:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:38:18 compute-0 sshd-session[165011]: Connection closed by invalid user vagrant 209.38.206.249 port 37264 [preauth]
Nov 24 09:38:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65378cf5773372c520718778ee4a4c13f21c1a6c63c3b410ab8e2f4bd51aae0e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65378cf5773372c520718778ee4a4c13f21c1a6c63c3b410ab8e2f4bd51aae0e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 09:38:18 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317.
Nov 24 09:38:18 compute-0 podman[165052]: 2025-11-24 09:38:18.182706938 +0000 UTC m=+0.121526276 container init c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: + sudo -E kolla_set_configs
Nov 24 09:38:18 compute-0 podman[165052]: 2025-11-24 09:38:18.217661405 +0000 UTC m=+0.156480743 container start c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 24 09:38:18 compute-0 edpm-start-podman-container[165052]: ovn_metadata_agent
Nov 24 09:38:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:18 compute-0 podman[165072]: 2025-11-24 09:38:18.280971716 +0000 UTC m=+0.064057710 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
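The transient "Started /usr/bin/podman healthcheck run <id>" unit plus the health_status=healthy event above show systemd periodically executing the healthcheck declared in config_data ('test': '/openstack/healthcheck', mounted read-only into the container). The exit status of podman healthcheck run carries the verdict; a minimal sketch:

    # Run a container's healthcheck the same way the systemd-driven unit in
    # the log does, mapping the exit status (0 = healthy, nonzero = unhealthy).
    import subprocess

    def health(container: str = "ovn_metadata_agent") -> str:
        rc = subprocess.run(["podman", "healthcheck", "run", container]).returncode
        return "healthy" if rc == 0 else "unhealthy"

    print(health())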
Nov 24 09:38:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:18.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:18 compute-0 edpm-start-podman-container[165051]: Creating additional drop-in dependency for "ovn_metadata_agent" (c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317)
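"Creating additional drop-in dependency" means edpm_ansible writes a systemd drop-in tying the edpm_ovn_metadata_agent.service wrapper to the running container, which is why systemd reloads on the next line. The log does not show the drop-in's contents; the following is only a plausible shape, with the directives being assumptions and only the scope name taken from the container ID logged above:

    # Plausible shape of the drop-in (assumption; the log does not show it).
    # The scope name is derived from the container ID in the events above.
    from pathlib import Path

    scope = ("libpod-c2993d2e1e7492e8913f6872df4d50c9e6d5"
             "a028204a98543d3426df327dd317.scope")
    dropin = Path("/run/systemd/system/edpm_ovn_metadata_agent.service.d/override.conf")
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text(f"[Unit]\nRequires={scope}\nAfter={scope}\n")
    # systemd must daemon-reload to pick this up, which matches the
    # "Reloading." journal entry that follows.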
Nov 24 09:38:18 compute-0 systemd[1]: Reloading.
Nov 24 09:38:18 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Validating config file
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Copying service configuration files
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Writing out command to execute
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
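The INFO lines from kolla_set_configs trace the COPY_ALWAYS strategy: load and validate /var/lib/kolla/config_files/config.json, delete each destination, copy the source over it, fix permissions on the listed paths, and finally write the command to execute. A minimal reconstruction of the config.json shape consistent with those lines (owner and mode values are assumptions, not read from the file):

    # Reconstructed shape of the kolla config.json being applied above.
    # Paths match the INFO lines; owner/perm values are assumptions.
    config = {
        "command": "neutron-ovn-metadata-agent",
        "config_files": [
            {
                "source": "/etc/neutron.conf.d/01-rootwrap.conf",
                "dest": "/etc/neutron/rootwrap.conf",
                "owner": "neutron",
                "perm": "0644",
            },
        ],
        "permissions": [
            {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": True},
        ],
    }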
Nov 24 09:38:18 compute-0 systemd-rc-local-generator[165137]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: ++ cat /run_command
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: + CMD=neutron-ovn-metadata-agent
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: + ARGS=
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: + sudo kolla_copy_cacerts
Nov 24 09:38:18 compute-0 systemd-sysv-generator[165143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: + [[ ! -n '' ]]
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: + . kolla_extend_start
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: Running command: 'neutron-ovn-metadata-agent'
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: + umask 0022
Nov 24 09:38:18 compute-0 ovn_metadata_agent[165067]: + exec neutron-ovn-metadata-agent
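The "+" lines are bash xtrace from kolla's start script: read the command that kolla_set_configs wrote to /run_command, source kolla_extend_start, set the umask, and exec the agent so it replaces the shell as the container's main process. The same flow, sketched in Python:

    # Same flow as the traced start script above, sketched in Python: read
    # the prepared command and exec it so the agent replaces this process.
    import os
    import shlex

    with open("/run_command") as f:
        cmd = shlex.split(f.read().strip())   # here: ["neutron-ovn-metadata-agent"]

    os.umask(0o022)
    os.execvp(cmd[0], cmd)                    # does not return on success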
Nov 24 09:38:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:18 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:38:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:18 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:38:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:18 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:18 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 24 09:38:18 compute-0 sudo[165127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:38:18 compute-0 sudo[165127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:18 compute-0 sudo[165127]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:18 compute-0 sudo[164988]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:18 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:19 compute-0 ceph-mon[74331]: pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:38:19 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:38:19 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:38:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:19 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:38:19 compute-0 sshd-session[155876]: Connection closed by 192.168.122.30 port 44032
Nov 24 09:38:19 compute-0 sshd-session[155873]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:38:19 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Nov 24 09:38:19 compute-0 systemd[1]: session-52.scope: Consumed 56.944s CPU time.
Nov 24 09:38:19 compute-0 systemd-logind[822]: Session 52 logged out. Waiting for processes to exit.
Nov 24 09:38:19 compute-0 systemd-logind[822]: Removed session 52.
Nov 24 09:38:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:38:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:19.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:38:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:38:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:20.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.508 165073 INFO neutron.common.config [-] Logging enabled!
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.508 165073 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.508 165073 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
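Everything from the asterisk ruler down is oslo.config's log_opt_values() dump: because debug = True, the agent logs every registered option with its effective value, and options registered with secret=True are masked, which is why transport_url and metadata_proxy_shared_secret appear as **** further down. A minimal reproduction, with the two options as examples picked from the dump (requires the oslo.config package):

    # Minimal reproduction of the option dump that follows. Options
    # registered with secret=True are printed as **** by log_opt_values().
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.IntOpt("agent_down_time", default=75),
        cfg.StrOpt("transport_url", secret=True, default="rabbit://example"),
    ])
    CONF([])                                  # parse an empty argv
    CONF.log_opt_values(LOG, logging.DEBUG)   # emits one line per option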
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.509 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.509 165073 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.509 165073 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.509 165073 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.509 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.509 165073 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.510 165073 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.510 165073 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.510 165073 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.510 165073 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.510 165073 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.510 165073 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.510 165073 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.510 165073 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.510 165073 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.511 165073 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.511 165073 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.511 165073 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.511 165073 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.511 165073 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.511 165073 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.511 165073 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.511 165073 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.511 165073 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.511 165073 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.512 165073 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.512 165073 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.512 165073 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.512 165073 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.512 165073 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.512 165073 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.512 165073 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.512 165073 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.512 165073 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.513 165073 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.513 165073 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.513 165073 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.513 165073 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.513 165073 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.513 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.513 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.513 165073 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.513 165073 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.513 165073 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.514 165073 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.514 165073 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.514 165073 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.514 165073 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.514 165073 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.514 165073 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.514 165073 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.514 165073 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.514 165073 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.514 165073 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.515 165073 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.515 165073 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.515 165073 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.515 165073 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.515 165073 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.515 165073 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.515 165073 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.515 165073 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.516 165073 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.516 165073 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.516 165073 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.516 165073 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.516 165073 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.516 165073 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.516 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.516 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.516 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.517 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.517 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.517 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.517 165073 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.517 165073 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.517 165073 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.517 165073 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.518 165073 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.518 165073 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.518 165073 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.518 165073 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.518 165073 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.518 165073 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.518 165073 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.518 165073 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.518 165073 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.519 165073 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.519 165073 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.519 165073 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.519 165073 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.519 165073 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.519 165073 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.519 165073 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.519 165073 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.519 165073 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.520 165073 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.520 165073 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.520 165073 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.520 165073 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.520 165073 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.520 165073 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.520 165073 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.520 165073 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.520 165073 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.520 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.521 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.521 165073 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.521 165073 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.521 165073 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.521 165073 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.521 165073 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.521 165073 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.521 165073 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.522 165073 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.522 165073 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.522 165073 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.522 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.522 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.522 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.522 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.522 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.523 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.523 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.523 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.523 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.523 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.523 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.523 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.523 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.523 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.524 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.524 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.524 165073 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.524 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.524 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.524 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.524 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.524 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.525 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.525 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.525 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.525 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.525 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.525 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.525 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.525 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.525 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.526 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.526 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.526 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.526 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.526 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.526 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.526 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.526 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.527 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.527 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.527 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.527 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.527 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.527 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.527 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.527 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.527 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.528 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.528 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.528 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.528 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.528 165073 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.528 165073 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.528 165073 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.529 165073 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.529 165073 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.529 165073 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.529 165073 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.529 165073 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.529 165073 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.529 165073 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.529 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.530 165073 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.530 165073 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.530 165073 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.530 165073 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.530 165073 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.530 165073 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.530 165073 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.530 165073 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.531 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.531 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.531 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.531 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.531 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.531 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.531 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.531 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.532 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.532 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.532 165073 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.532 165073 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.532 165073 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.532 165073 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.532 165073 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.532 165073 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.533 165073 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.533 165073 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.533 165073 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.533 165073 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.533 165073 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.533 165073 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.533 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.534 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.534 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.534 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.534 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.534 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.534 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.534 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.534 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.534 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.534 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.535 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.535 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.535 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.535 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.535 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.535 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.535 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.535 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.535 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.536 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.536 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.536 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.536 165073 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.536 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.536 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.536 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.536 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.536 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.537 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.537 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.537 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.537 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.537 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.537 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.537 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.537 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.537 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.538 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.538 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.538 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.538 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.538 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.538 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.538 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.538 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.538 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.539 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.539 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.539 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.539 165073 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.539 165073 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.539 165073 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.539 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.539 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.539 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.540 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.540 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.540 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.540 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.540 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.540 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.540 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.540 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.541 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.541 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.541 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.541 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.541 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.541 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.541 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.541 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.541 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.542 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.542 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.542 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.542 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.542 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.542 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.542 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.542 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.542 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.543 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.543 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.543 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.543 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.543 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.543 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.543 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.543 165073 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.543 165073 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.552 165073 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.553 165073 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.553 165073 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.553 165073 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.553 165073 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.567 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name feb242b9-6422-4c37-bc7a-5c14a79beaf8 (UUID: feb242b9-6422-4c37-bc7a-5c14a79beaf8) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 24 09:38:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:20 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.589 165073 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.589 165073 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.590 165073 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.590 165073 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.593 165073 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.599 165073 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.605 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'feb242b9-6422-4c37-bc7a-5c14a79beaf8'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], external_ids={}, name=feb242b9-6422-4c37-bc7a-5c14a79beaf8, nb_cfg_timestamp=1763977034453, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.606 165073 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f45b2855bb0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.607 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.607 165073 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.607 165073 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.607 165073 INFO oslo_service.service [-] Starting 1 workers
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.611 165073 DEBUG oslo_service.service [-] Started child 165208 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.614 165073 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp_8_r1hak/privsep.sock']
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.615 165208 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-191123'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.634 165208 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.635 165208 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.635 165208 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.637 165208 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.643 165208 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 09:38:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:20.649 165208 INFO eventlet.wsgi.server [-] (165208) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 24 09:38:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:20 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:38:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:38:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:38:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:38:21 compute-0 ceph-mon[74331]: pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:38:21 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 24 09:38:21 compute-0 sudo[165213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:38:21 compute-0 sudo[165213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:21 compute-0 sudo[165213]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:21 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:21.400 165073 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 09:38:21 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:21.401 165073 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_8_r1hak/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 09:38:21 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:21.191 165227 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 09:38:21 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:21.197 165227 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 09:38:21 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:21.199 165227 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 24 09:38:21 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:21.199 165227 INFO oslo.privsep.daemon [-] privsep daemon running as pid 165227
Nov 24 09:38:21 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:21.404 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[c6effc44-671a-421a-974b-d26f75b33bd2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:38:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:21 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 24 09:38:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:21 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:38:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000022s ======
Nov 24 09:38:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:21.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 24 09:38:21 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:21.940 165227 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:38:21 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:21.941 165227 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:38:21 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:21.941 165227 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:38:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:22.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.481 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[ed90abfa-7d1f-47e9-be8c-5f9dbfa4c743]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.483 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, column=external_ids, values=({'neutron:ovn-metadata-id': '098dbcc5-22a0-5f92-a78a-056efa94c777'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.491 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.496 165073 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.497 165073 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.497 165073 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.497 165073 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.497 165073 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.497 165073 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.497 165073 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.497 165073 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.497 165073 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.498 165073 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.498 165073 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.498 165073 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.498 165073 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.498 165073 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.498 165073 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.498 165073 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.499 165073 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.499 165073 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.499 165073 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.499 165073 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.499 165073 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.499 165073 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.499 165073 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.499 165073 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.500 165073 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.500 165073 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.500 165073 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.500 165073 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.500 165073 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.500 165073 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.500 165073 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.500 165073 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.500 165073 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.501 165073 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.501 165073 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.501 165073 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.501 165073 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.501 165073 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.502 165073 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.502 165073 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.502 165073 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.502 165073 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.502 165073 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.502 165073 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.502 165073 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.502 165073 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.502 165073 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.503 165073 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.503 165073 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.503 165073 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.503 165073 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.503 165073 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.503 165073 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.503 165073 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.503 165073 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.504 165073 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.504 165073 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.504 165073 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.504 165073 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.504 165073 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.504 165073 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.504 165073 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.504 165073 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.504 165073 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.505 165073 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.505 165073 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.505 165073 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.505 165073 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.505 165073 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.505 165073 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.505 165073 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.505 165073 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.506 165073 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.506 165073 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.506 165073 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.506 165073 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.506 165073 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.506 165073 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.506 165073 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.506 165073 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.506 165073 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.507 165073 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.507 165073 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.507 165073 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.507 165073 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.507 165073 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.507 165073 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.507 165073 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.507 165073 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.507 165073 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.507 165073 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.508 165073 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.508 165073 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.508 165073 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.508 165073 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.508 165073 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.508 165073 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.508 165073 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.508 165073 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.508 165073 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.508 165073 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.509 165073 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.509 165073 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.509 165073 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.509 165073 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.509 165073 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.509 165073 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.509 165073 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.510 165073 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.510 165073 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.510 165073 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.510 165073 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.510 165073 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.511 165073 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.511 165073 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.511 165073 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.511 165073 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.511 165073 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.511 165073 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.511 165073 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.511 165073 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.511 165073 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.512 165073 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.512 165073 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.512 165073 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.512 165073 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.512 165073 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.512 165073 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.512 165073 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.512 165073 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.513 165073 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.513 165073 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.513 165073 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.513 165073 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.513 165073 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.513 165073 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.513 165073 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.514 165073 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.514 165073 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.514 165073 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.514 165073 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.514 165073 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.514 165073 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.514 165073 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.514 165073 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.514 165073 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.514 165073 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.515 165073 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.515 165073 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.515 165073 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.515 165073 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.515 165073 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.515 165073 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.515 165073 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.515 165073 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.515 165073 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.516 165073 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.516 165073 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.516 165073 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.516 165073 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.516 165073 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.516 165073 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.516 165073 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.516 165073 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.516 165073 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.517 165073 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.517 165073 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.517 165073 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.517 165073 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.517 165073 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.517 165073 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.517 165073 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.517 165073 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.518 165073 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.518 165073 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.518 165073 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.518 165073 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.518 165073 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.518 165073 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.518 165073 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.518 165073 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.518 165073 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.518 165073 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.519 165073 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.519 165073 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.519 165073 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.519 165073 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.519 165073 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.519 165073 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.519 165073 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.520 165073 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.520 165073 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.520 165073 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.520 165073 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.520 165073 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.520 165073 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.520 165073 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.520 165073 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.520 165073 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.520 165073 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.521 165073 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.521 165073 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.521 165073 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.521 165073 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.521 165073 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.521 165073 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.521 165073 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.521 165073 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.522 165073 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.522 165073 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.522 165073 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.522 165073 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.522 165073 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.522 165073 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.522 165073 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.522 165073 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.523 165073 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.523 165073 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.523 165073 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.523 165073 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.523 165073 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.523 165073 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.523 165073 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.523 165073 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.523 165073 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.524 165073 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.524 165073 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.524 165073 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.524 165073 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.524 165073 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.524 165073 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.524 165073 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.524 165073 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.524 165073 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.524 165073 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.525 165073 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.525 165073 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.525 165073 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.525 165073 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.525 165073 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.525 165073 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.525 165073 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.526 165073 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.526 165073 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.526 165073 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.526 165073 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.526 165073 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.527 165073 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.527 165073 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.527 165073 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.527 165073 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.527 165073 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.527 165073 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.527 165073 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.527 165073 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.528 165073 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.528 165073 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.528 165073 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.528 165073 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.528 165073 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.528 165073 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.528 165073 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.529 165073 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.529 165073 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.529 165073 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.529 165073 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.529 165073 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.529 165073 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.529 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.530 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.530 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.530 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.530 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.530 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.530 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.530 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.531 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.531 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.531 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.531 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.531 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.531 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.531 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.532 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.532 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.532 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.532 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.532 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.532 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.532 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.532 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.533 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.533 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.533 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.533 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.533 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.533 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.533 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.534 165073 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.534 165073 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.534 165073 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.534 165073 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.534 165073 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:38:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:38:22.534 165073 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 09:38:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:22 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:22 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:23 compute-0 ceph-mon[74331]: pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 24 09:38:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:23 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 24 09:38:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:23.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093824 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:38:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:24.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:24 compute-0 ceph-mon[74331]: pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 24 09:38:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:24 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:24 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:25 compute-0 sshd-session[165247]: Accepted publickey for zuul from 192.168.122.30 port 41930 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:38:25 compute-0 systemd-logind[822]: New session 53 of user zuul.
Nov 24 09:38:25 compute-0 systemd[1]: Started Session 53 of User zuul.
Nov 24 09:38:25 compute-0 sshd-session[165247]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:38:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:25 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 24 09:38:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:25.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:26.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:26 compute-0 python3.9[165401]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:38:26 compute-0 ceph-mon[74331]: pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Nov 24 09:38:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:26 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:26 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:38:26.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:38:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093827 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:38:27 compute-0 sudo[165557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onwbguhzphgompetrecgvjdsghfstuka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977107.0584652-62-128851811852524/AnsiballZ_command.py'
Nov 24 09:38:27 compute-0 sudo[165557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:27 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:38:27 compute-0 python3.9[165559]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:27.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:27 compute-0 sudo[165557]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:28.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:28 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:28 compute-0 ceph-mon[74331]: pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:38:28 compute-0 sudo[165723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxlmtmmfwybwlipqqgjzukalwdvtvqat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977108.2878969-95-186537064258630/AnsiballZ_systemd_service.py'
Nov 24 09:38:28 compute-0 sudo[165723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:28 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:29 compute-0 python3.9[165725]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:38:29 compute-0 systemd[1]: Reloading.
Nov 24 09:38:29 compute-0 systemd-rc-local-generator[165752]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:38:29 compute-0 systemd-sysv-generator[165756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:38:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:29 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Nov 24 09:38:29 compute-0 sudo[165723]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:29.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:30.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:30 compute-0 python3.9[165911]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:38:30 compute-0 network[165929]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:38:30 compute-0 network[165930]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:38:30 compute-0 network[165931]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:38:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003d80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:30 compute-0 ceph-mon[74331]: pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Nov 24 09:38:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:38:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:30 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:38:30] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:38:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:38:30] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:38:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:31 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:38:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:31.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:32.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:32 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:32 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:38:32 compute-0 ceph-mon[74331]: pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:38:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:32 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003da0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:38:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:33 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:33.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:34.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:34 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:34 compute-0 ceph-mon[74331]: pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:38:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:34 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee74003390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:38:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:35 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:35.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:35 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:38:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:35 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:38:35 compute-0 sudo[166196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmlyokmgsupggpctqgyngsemaiahtymd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977115.67035-152-4113816524655/AnsiballZ_systemd_service.py'
Nov 24 09:38:35 compute-0 sudo[166196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:36 compute-0 python3.9[166198]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:36.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:36 compute-0 sudo[166196]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:36 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:36 compute-0 sudo[166351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrqijyujkzgapfrmmsofgbwemrhqpiwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977116.495739-152-57390816577468/AnsiballZ_systemd_service.py'
Nov 24 09:38:36 compute-0 sudo[166351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:36 compute-0 ceph-mon[74331]: pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:38:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:36 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:38:36.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:38:37 compute-0 python3.9[166353]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:37 compute-0 sudo[166351]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:38:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:37 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:37 compute-0 sudo[166505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lclmmhcrxestggkyyziziasyfccmfhle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977117.2611876-152-118398889347179/AnsiballZ_systemd_service.py'
Nov 24 09:38:37 compute-0 sudo[166505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:37.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:37 compute-0 python3.9[166507]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:37 compute-0 sudo[166505]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:38.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:38 compute-0 sudo[166659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmzjiklelvqmjbixfbgnhtwajwjoiceg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977118.0706263-152-201091327421205/AnsiballZ_systemd_service.py'
Nov 24 09:38:38 compute-0 sudo[166659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:38 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:38 compute-0 python3.9[166661]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:38 compute-0 sudo[166659]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:38 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:38:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:38 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:39 compute-0 ceph-mon[74331]: pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:38:39 compute-0 sudo[166813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icvimwfwcvqngdqmfmpsthylhnwdfuhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977118.842147-152-275480954317511/AnsiballZ_systemd_service.py'
Nov 24 09:38:39 compute-0 sudo[166813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:39 compute-0 python3.9[166815]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:39 compute-0 sudo[166813]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:38:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:39 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee880010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:39.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:40 compute-0 sudo[166967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aelwkydmffbhmrjogndoiaiagzvwnpkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977119.5906067-152-116903529048327/AnsiballZ_systemd_service.py'
Nov 24 09:38:40 compute-0 sudo[166967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:40.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:40 compute-0 python3.9[166969]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:40 compute-0 sudo[166967]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:40 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:40 compute-0 sudo[167131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsrhlmnqkoakoqmfnwnohfyxixrjdwsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977120.6590223-152-106461112626875/AnsiballZ_systemd_service.py'
Nov 24 09:38:40 compute-0 sudo[167131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:38:40] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:38:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:38:40] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:38:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:40 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee880010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:41 compute-0 podman[167095]: 2025-11-24 09:38:41.033802708 +0000 UTC m=+0.109236267 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:38:41 compute-0 ceph-mon[74331]: pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:38:41 compute-0 python3.9[167137]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:38:41 compute-0 sudo[167149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:38:41 compute-0 sudo[167149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:38:41 compute-0 sudo[167149]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:41 compute-0 sudo[167131]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:38:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:41 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:41.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:42.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:42 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:42 compute-0 sudo[167325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idrxzbhqimrweebgaubguymxtnfmazbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977122.2072194-308-266165918480617/AnsiballZ_file.py'
Nov 24 09:38:42 compute-0 sudo[167325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:42 compute-0 python3.9[167327]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:42 compute-0 sudo[167325]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:43 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:43 compute-0 sudo[167478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjipsgwxmtpthwfcwdqsymcdjekqydzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977123.0135624-308-147091878997883/AnsiballZ_file.py'
Nov 24 09:38:43 compute-0 sudo[167478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:43 compute-0 python3.9[167480]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:43 compute-0 sudo[167478]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:38:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:43 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee880010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:43 compute-0 ceph-mon[74331]: pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:38:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:43.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:43 compute-0 sudo[167630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dindgrggibcbhejsajcvedaxczmxoxzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977123.638901-308-34427187082379/AnsiballZ_file.py'
Nov 24 09:38:43 compute-0 sudo[167630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:44 compute-0 python3.9[167632]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:44 compute-0 sudo[167630]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093844 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:38:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:44.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:44 compute-0 sudo[167783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itjwsmluqshawnhghvwaxiiwvusiagqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977124.2609656-308-65845177575372/AnsiballZ_file.py'
Nov 24 09:38:44 compute-0 sudo[167783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:44 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee8c003e60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:45 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:45 compute-0 ceph-mon[74331]: pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:38:45 compute-0 python3.9[167785]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:45 compute-0 sudo[167783]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:38:45
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'volumes', 'backups', '.nfs', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.log', '.mgr']
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:38:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:45 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:45 compute-0 sudo[167936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezrjlmpnjyqwkjwugzdebhwtdzuslfia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977125.26093-308-195838971249804/AnsiballZ_file.py'
Nov 24 09:38:45 compute-0 sudo[167936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:38:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:38:45 compute-0 python3.9[167938]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:45 compute-0 sudo[167936]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:45.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:38:46 compute-0 sudo[168088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhxkfataennxinsnicuhawfuabzyluir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977125.8806078-308-174616424698660/AnsiballZ_file.py'
Nov 24 09:38:46 compute-0 sudo[168088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:46.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:46 compute-0 python3.9[168090]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:46 compute-0 sudo[168088]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:46 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:46 compute-0 sudo[168241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqqkuykmnumypqowysdqwgfoiaewpacs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977126.5137103-308-188353678927890/AnsiballZ_file.py'
Nov 24 09:38:46 compute-0 sudo[168241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:46 compute-0 python3.9[168243]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:46 compute-0 sudo[168241]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:38:46.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:38:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:47 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:47 compute-0 ceph-mon[74331]: pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:38:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:38:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:47 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:47.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:47 compute-0 sudo[168394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ektkabaqievdbkathlzhdnrukfzuipno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977127.5703042-458-38830725202349/AnsiballZ_file.py'
Nov 24 09:38:47 compute-0 sudo[168394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:48 compute-0 python3.9[168396]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:48 compute-0 sudo[168394]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:48.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:48 compute-0 sudo[168561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgzehmnflvndscpukfmnutjvjyssengi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977128.2078805-458-223607449446880/AnsiballZ_file.py'
Nov 24 09:38:48 compute-0 sudo[168561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:48 compute-0 podman[168521]: 2025-11-24 09:38:48.495706929 +0000 UTC m=+0.052662840 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 24 09:38:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:48 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:48 compute-0 python3.9[168569]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:48 compute-0 sudo[168561]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:49 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:49 compute-0 ceph-mon[74331]: pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:38:49 compute-0 sudo[168719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmexzbywyxkwaxdbbnmvejeksvwjupfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977128.8358529-458-101035941406052/AnsiballZ_file.py'
Nov 24 09:38:49 compute-0 sudo[168719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:49 compute-0 python3.9[168721]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:49 compute-0 sudo[168719]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:38:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:49 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:49.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:49 compute-0 sudo[168872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwtkbtbvaicezwzgyfejrnpdcjmaovkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977129.5195446-458-111910666104435/AnsiballZ_file.py'
Nov 24 09:38:49 compute-0 sudo[168872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:49 compute-0 python3.9[168874]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:50 compute-0 sudo[168872]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:50.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:50 compute-0 sudo[169025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vipdzzflkjrtfxfazorlsnfeqhbklbyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977130.1672368-458-159694656476353/AnsiballZ_file.py'
Nov 24 09:38:50 compute-0 sudo[169025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:50 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:50 compute-0 python3.9[169027]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:50 compute-0 sudo[169025]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:38:50] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:38:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:38:50] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:38:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:51 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feeac0089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:51 compute-0 sudo[169177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krtdktatvkpurwkgnlfecrmezanmqzxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977130.8163042-458-80777998090661/AnsiballZ_file.py'
Nov 24 09:38:51 compute-0 sudo[169177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:51 compute-0 ceph-mon[74331]: pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:38:51 compute-0 python3.9[169179]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:51 compute-0 sudo[169177]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:38:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:51 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:51 compute-0 sudo[169330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhkyfadibkradacgmqheesagtvwxvyty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977131.4474082-458-45742343349317/AnsiballZ_file.py'
Nov 24 09:38:51 compute-0 sudo[169330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:51.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:51 compute-0 python3.9[169332]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:38:51 compute-0 sudo[169330]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:52.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:52 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee88003080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:53 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:53 compute-0 ceph-mon[74331]: pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:38:53 compute-0 sudo[169486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wugezcbvzcdpwoubsbzmikshcnsugmoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977132.97877-611-261459739750527/AnsiballZ_command.py'
Nov 24 09:38:53 compute-0 sudo[169486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:53 compute-0 python3.9[169488]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:53 compute-0 sudo[169486]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:38:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:53 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:53.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:54 compute-0 python3.9[169640]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 09:38:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:54.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:54 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:54 compute-0 sudo[169791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tajldhcdlpsirmcoocgqeuzlnnsdsgjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977134.6958108-665-158219732518509/AnsiballZ_systemd_service.py'
Nov 24 09:38:54 compute-0 sudo[169791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:55 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee70000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:55 compute-0 ceph-mon[74331]: pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:38:55 compute-0 python3.9[169793]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:38:55 compute-0 systemd[1]: Reloading.
Nov 24 09:38:55 compute-0 systemd-sysv-generator[169823]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:38:55 compute-0 systemd-rc-local-generator[169820]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:38:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:38:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:55 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:55 compute-0 sudo[169791]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:55.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:38:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:56.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:38:56 compute-0 sudo[169979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzuqgvxehawxqljmfkycgtekvlmfigwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977136.3002217-689-67641308723820/AnsiballZ_command.py'
Nov 24 09:38:56 compute-0 sudo[169979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:56 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0001e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:56 compute-0 python3.9[169981]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:56 compute-0 sudo[169979]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:38:56.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:38:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:57 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0001e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:57 compute-0 ceph-mon[74331]: pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:38:57 compute-0 sudo[170133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iduchadoqoxqwdvibpydtzwpruhnfexj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977136.9526975-689-12115906291367/AnsiballZ_command.py'
Nov 24 09:38:57 compute-0 sudo[170133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:57 compute-0 python3.9[170135]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:57 compute-0 sudo[170133]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093857 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:38:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:38:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:57 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee700016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:57.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:38:57 compute-0 sudo[170286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqebhzflxydqgujhmrwguwxbfxzqqhrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977137.580623-689-146917016253074/AnsiballZ_command.py'
Nov 24 09:38:57 compute-0 sudo[170286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:58 compute-0 python3.9[170288]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:58 compute-0 sudo[170286]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:38:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:38:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:38:58.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:38:58 compute-0 sudo[170440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cabptyaassbzcycdovjvrlfqvxkmwwhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977138.204191-689-27252226229802/AnsiballZ_command.py'
Nov 24 09:38:58 compute-0 sudo[170440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:58 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0001e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:58 compute-0 python3.9[170442]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:58 compute-0 sudo[170440]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:59 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:59 compute-0 sudo[170593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkgassztxeqqmnnvgyduyditbdwubsba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977138.8376558-689-65029872137385/AnsiballZ_command.py'
Nov 24 09:38:59 compute-0 sudo[170593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:38:59 compute-0 ceph-mon[74331]: pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:38:59 compute-0 python3.9[170595]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:38:59 compute-0 sudo[170593]: pam_unix(sudo:session): session closed for user root
Nov 24 09:38:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:38:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:38:59 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee80004620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:38:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:38:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:38:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:38:59.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:00.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:00 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee700016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:39:00] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:39:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:39:00] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:39:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:01 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:01 compute-0 ceph-mon[74331]: pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:39:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:39:01 compute-0 sudo[170749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldejwrxufowkdbasrwslcqqrbbxvuleb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977140.731076-689-89322940025636/AnsiballZ_command.py'
Nov 24 09:39:01 compute-0 sudo[170749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:01 compute-0 sudo[170754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:39:01 compute-0 python3.9[170751]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:39:01 compute-0 sudo[170754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:01 compute-0 sudo[170754]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:01 compute-0 sudo[170749]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:39:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:01 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:01 compute-0 sshd-session[170752]: Invalid user ftpuser from 209.38.206.249 port 43862
Nov 24 09:39:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:01.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:01 compute-0 sshd-session[170752]: Connection closed by invalid user ftpuser 209.38.206.249 port 43862 [preauth]
Nov 24 09:39:01 compute-0 sudo[170929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otsjrepnanxmjrrsznkebpprovalrsuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977141.5814106-689-252929318815277/AnsiballZ_command.py'
Nov 24 09:39:01 compute-0 sudo[170929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:01 compute-0 python3.9[170931]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:39:02 compute-0 sudo[170929]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:02.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:02 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee700016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:03 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800047a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:03 compute-0 ceph-mon[74331]: pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:39:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:39:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:03 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800047a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:03.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:04 compute-0 sudo[171085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnzzsjsvgsfzruzydyctjjgqtkiwcjqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977143.9005108-851-188169369733093/AnsiballZ_getent.py'
Nov 24 09:39:04 compute-0 sudo[171085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:04.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:04 compute-0 python3.9[171087]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 24 09:39:04 compute-0 sudo[171085]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:04 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:05 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee700016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:05 compute-0 ceph-mon[74331]: pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:39:05 compute-0 sudo[171239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aouijvhgkvrygwoylwqrztkkdwiaazeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977144.7831326-875-35893705696440/AnsiballZ_group.py'
Nov 24 09:39:05 compute-0 sudo[171239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:05 compute-0 python3.9[171241]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 09:39:05 compute-0 groupadd[171242]: group added to /etc/group: name=libvirt, GID=42473
Nov 24 09:39:05 compute-0 groupadd[171242]: group added to /etc/gshadow: name=libvirt
Nov 24 09:39:05 compute-0 groupadd[171242]: new group: name=libvirt, GID=42473
Nov 24 09:39:05 compute-0 sudo[171239]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:39:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:05 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:39:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:05.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:39:06 compute-0 sudo[171398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuyhtkjaembvukimzfdgdlrmzrutduax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977145.7795384-899-126769643864204/AnsiballZ_user.py'
Nov 24 09:39:06 compute-0 sudo[171398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:39:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:39:06 compute-0 python3.9[171400]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 09:39:06 compute-0 useradd[171402]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 24 09:39:06 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:39:06 compute-0 sudo[171398]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:06 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800047a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:39:06.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:39:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:07 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:07 compute-0 ceph-mon[74331]: pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:39:07 compute-0 sudo[171562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcftuywyfuhyqnsmckahrktvmbitxgeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977147.0823836-932-254581930354013/AnsiballZ_setup.py'
Nov 24 09:39:07 compute-0 sudo[171562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:07 compute-0 sshd-session[171434]: Invalid user odoo from 209.38.206.249 port 43866
Nov 24 09:39:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:07 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:39:07 compute-0 sshd-session[171434]: Connection closed by invalid user odoo 209.38.206.249 port 43866 [preauth]
Nov 24 09:39:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:07 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee70002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:07 compute-0 python3.9[171564]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:39:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:07.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:07 compute-0 sudo[171562]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:08 compute-0 sudo[171647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zotdttukadpyachktjnjsuhoeynwilyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977147.0823836-932-254581930354013/AnsiballZ_dnf.py'
Nov 24 09:39:08 compute-0 sudo[171647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:39:08 compute-0 ceph-mon[74331]: pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:08.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:08 compute-0 python3.9[171649]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:39:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:08 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:09 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800047a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:09 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c003ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:09.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:10.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:10 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:39:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:10 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:39:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:10 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee70002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:10 compute-0 ceph-mon[74331]: pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:39:10] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:39:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:39:10] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:39:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:11 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7feea0003330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:11 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800047a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:11.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:11 compute-0 podman[171661]: 2025-11-24 09:39:11.909757061 +0000 UTC m=+0.164996845 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 24 09:39:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:12.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:12 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800047a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:12 compute-0 ceph-mon[74331]: pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:13 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee70002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:13 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:39:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:13 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee70002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:13.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:14.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:14 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee800047a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:14 compute-0 ceph-mon[74331]: pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:15 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:39:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:39:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:39:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:39:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:39:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:39:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:15 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:39:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:39:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:15.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:39:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:16.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:16 compute-0 kernel: ganesha.nfsd[156268]: segfault at 50 ip 00007fef57b4a32e sp 00007fef0affc210 error 4 in libntirpc.so.5.8[7fef57b2f000+2c000] likely on CPU 4 (core 0, socket 4)
Nov 24 09:39:16 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:39:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[133218]: 24/11/2025 09:39:16 : epoch 692426a6 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fee7c004050 fd 48 proxy ignored for local
Nov 24 09:39:16 compute-0 systemd[1]: Started Process Core Dump (PID 171695/UID 0).
Nov 24 09:39:16 compute-0 ceph-mon[74331]: pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:39:17.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:39:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:39:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:39:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:17.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:39:17 compute-0 systemd-coredump[171697]: Process 133222 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 65:
                                                    #0  0x00007fef57b4a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:39:17 compute-0 systemd[1]: systemd-coredump@2-171695-0.service: Deactivated successfully.
Nov 24 09:39:17 compute-0 systemd[1]: systemd-coredump@2-171695-0.service: Consumed 1.242s CPU time.
Nov 24 09:39:17 compute-0 podman[171703]: 2025-11-24 09:39:17.991782189 +0000 UTC m=+0.024539874 container died df9e80bc1955751465649d3c32cf2badcbe991afb82a69a98d6dbf4f4064c0aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 24 09:39:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dc5bed0541814efe3dddc6359c8ad3c2d9239e94e5fc2a9330a444612bde598-merged.mount: Deactivated successfully.
Nov 24 09:39:18 compute-0 podman[171703]: 2025-11-24 09:39:18.106465464 +0000 UTC m=+0.139223139 container remove df9e80bc1955751465649d3c32cf2badcbe991afb82a69a98d6dbf4f4064c0aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:39:18 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:39:18 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Failed with result 'exit-code'.
Nov 24 09:39:18 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 2.111s CPU time.
Nov 24 09:39:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:18.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:18 compute-0 ceph-mon[74331]: pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:39:18 compute-0 podman[171745]: 2025-11-24 09:39:18.811127334 +0000 UTC m=+0.081707058 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 24 09:39:19 compute-0 sudo[171764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:39:19 compute-0 sudo[171764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:19 compute-0 sudo[171764]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:19 compute-0 sudo[171790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:39:19 compute-0 sudo[171790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093919 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:39:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:39:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:19.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:19 compute-0 sudo[171790]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:39:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:39:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:39:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:39:20 compute-0 sudo[171846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:39:20 compute-0 sudo[171846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:20 compute-0 sudo[171846]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:20 compute-0 sudo[171871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:39:20 compute-0 sudo[171871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:39:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:20.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:39:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:39:20.546 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:39:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:39:20.547 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:39:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:39:20.547 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:39:20 compute-0 podman[171939]: 2025-11-24 09:39:20.714134331 +0000 UTC m=+0.038239382 container create 6d50584cc5abd17985d60ce035593406bf40c0fe1693fd068f8e21ba7535c89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 24 09:39:20 compute-0 systemd[1]: Started libpod-conmon-6d50584cc5abd17985d60ce035593406bf40c0fe1693fd068f8e21ba7535c89c.scope.
Nov 24 09:39:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:39:20 compute-0 podman[171939]: 2025-11-24 09:39:20.699474618 +0000 UTC m=+0.023579689 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:39:20 compute-0 ceph-mon[74331]: pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:39:20 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:39:20 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:39:20 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:39:20 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:39:20 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:39:20 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:39:20 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:39:20 compute-0 podman[171939]: 2025-11-24 09:39:20.814219194 +0000 UTC m=+0.138324255 container init 6d50584cc5abd17985d60ce035593406bf40c0fe1693fd068f8e21ba7535c89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 09:39:20 compute-0 podman[171939]: 2025-11-24 09:39:20.824927487 +0000 UTC m=+0.149032538 container start 6d50584cc5abd17985d60ce035593406bf40c0fe1693fd068f8e21ba7535c89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lehmann, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:39:20 compute-0 podman[171939]: 2025-11-24 09:39:20.828622841 +0000 UTC m=+0.152727922 container attach 6d50584cc5abd17985d60ce035593406bf40c0fe1693fd068f8e21ba7535c89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:39:20 compute-0 happy_lehmann[171955]: 167 167
Nov 24 09:39:20 compute-0 systemd[1]: libpod-6d50584cc5abd17985d60ce035593406bf40c0fe1693fd068f8e21ba7535c89c.scope: Deactivated successfully.
Nov 24 09:39:20 compute-0 podman[171939]: 2025-11-24 09:39:20.833029143 +0000 UTC m=+0.157134194 container died 6d50584cc5abd17985d60ce035593406bf40c0fe1693fd068f8e21ba7535c89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lehmann, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:39:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b522bab9e7e7823396c599b5fe603854b3ed4a28d5dcbe1e55682566556b0d2b-merged.mount: Deactivated successfully.
Nov 24 09:39:20 compute-0 podman[171939]: 2025-11-24 09:39:20.874886767 +0000 UTC m=+0.198991818 container remove 6d50584cc5abd17985d60ce035593406bf40c0fe1693fd068f8e21ba7535c89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lehmann, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:39:20 compute-0 systemd[1]: libpod-conmon-6d50584cc5abd17985d60ce035593406bf40c0fe1693fd068f8e21ba7535c89c.scope: Deactivated successfully.
Nov 24 09:39:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:39:20] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Nov 24 09:39:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:39:20] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Nov 24 09:39:21 compute-0 podman[171979]: 2025-11-24 09:39:21.042531917 +0000 UTC m=+0.046037320 container create 55e80064fd963da738efedeaa0240ee47efd08fda18bd6a1a3f0e16d25e798c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:39:21 compute-0 systemd[1]: Started libpod-conmon-55e80064fd963da738efedeaa0240ee47efd08fda18bd6a1a3f0e16d25e798c9.scope.
Nov 24 09:39:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83c5085f707cb1ec5c9bb8da201a189a091916fecede7c53b15372406449cc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83c5085f707cb1ec5c9bb8da201a189a091916fecede7c53b15372406449cc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83c5085f707cb1ec5c9bb8da201a189a091916fecede7c53b15372406449cc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83c5085f707cb1ec5c9bb8da201a189a091916fecede7c53b15372406449cc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83c5085f707cb1ec5c9bb8da201a189a091916fecede7c53b15372406449cc3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:21 compute-0 podman[171979]: 2025-11-24 09:39:21.025049833 +0000 UTC m=+0.028555256 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:39:21 compute-0 podman[171979]: 2025-11-24 09:39:21.125231039 +0000 UTC m=+0.128736512 container init 55e80064fd963da738efedeaa0240ee47efd08fda18bd6a1a3f0e16d25e798c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_turing, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:39:21 compute-0 podman[171979]: 2025-11-24 09:39:21.134555436 +0000 UTC m=+0.138060859 container start 55e80064fd963da738efedeaa0240ee47efd08fda18bd6a1a3f0e16d25e798c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 09:39:21 compute-0 podman[171979]: 2025-11-24 09:39:21.138035255 +0000 UTC m=+0.141540688 container attach 55e80064fd963da738efedeaa0240ee47efd08fda18bd6a1a3f0e16d25e798c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:39:21 compute-0 clever_turing[171996]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:39:21 compute-0 clever_turing[171996]: --> All data devices are unavailable
Nov 24 09:39:21 compute-0 sudo[172010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:39:21 compute-0 sudo[172010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:21 compute-0 sudo[172010]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:21 compute-0 systemd[1]: libpod-55e80064fd963da738efedeaa0240ee47efd08fda18bd6a1a3f0e16d25e798c9.scope: Deactivated successfully.
Nov 24 09:39:21 compute-0 podman[171979]: 2025-11-24 09:39:21.514954165 +0000 UTC m=+0.518459578 container died 55e80064fd963da738efedeaa0240ee47efd08fda18bd6a1a3f0e16d25e798c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 24 09:39:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e83c5085f707cb1ec5c9bb8da201a189a091916fecede7c53b15372406449cc3-merged.mount: Deactivated successfully.
Nov 24 09:39:21 compute-0 podman[171979]: 2025-11-24 09:39:21.556409088 +0000 UTC m=+0.559914491 container remove 55e80064fd963da738efedeaa0240ee47efd08fda18bd6a1a3f0e16d25e798c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_turing, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 09:39:21 compute-0 systemd[1]: libpod-conmon-55e80064fd963da738efedeaa0240ee47efd08fda18bd6a1a3f0e16d25e798c9.scope: Deactivated successfully.
Nov 24 09:39:21 compute-0 sudo[171871]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:21 compute-0 sudo[172048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:39:21 compute-0 sudo[172048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:21 compute-0 sudo[172048]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:21 compute-0 sudo[172073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:39:21 compute-0 sudo[172073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:21.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:22 compute-0 podman[172139]: 2025-11-24 09:39:22.085190488 +0000 UTC m=+0.043942788 container create fcbb3027568a71abee8a6b15c397845d4fafe45658f3f6a5d2f5a2fe663cd422 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 24 09:39:22 compute-0 systemd[1]: Started libpod-conmon-fcbb3027568a71abee8a6b15c397845d4fafe45658f3f6a5d2f5a2fe663cd422.scope.
Nov 24 09:39:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:39:22 compute-0 podman[172139]: 2025-11-24 09:39:22.068607366 +0000 UTC m=+0.027359686 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:39:22 compute-0 podman[172139]: 2025-11-24 09:39:22.175188815 +0000 UTC m=+0.133941145 container init fcbb3027568a71abee8a6b15c397845d4fafe45658f3f6a5d2f5a2fe663cd422 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:39:22 compute-0 podman[172139]: 2025-11-24 09:39:22.183781263 +0000 UTC m=+0.142533563 container start fcbb3027568a71abee8a6b15c397845d4fafe45658f3f6a5d2f5a2fe663cd422 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 24 09:39:22 compute-0 podman[172139]: 2025-11-24 09:39:22.18720497 +0000 UTC m=+0.145957270 container attach fcbb3027568a71abee8a6b15c397845d4fafe45658f3f6a5d2f5a2fe663cd422 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:39:22 compute-0 focused_pasteur[172155]: 167 167
Nov 24 09:39:22 compute-0 systemd[1]: libpod-fcbb3027568a71abee8a6b15c397845d4fafe45658f3f6a5d2f5a2fe663cd422.scope: Deactivated successfully.
Nov 24 09:39:22 compute-0 podman[172139]: 2025-11-24 09:39:22.190881504 +0000 UTC m=+0.149633804 container died fcbb3027568a71abee8a6b15c397845d4fafe45658f3f6a5d2f5a2fe663cd422 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dbfed00a086e78a04272e98b6a4345a923ae50d3e8974a1d61d611e8c1b9f43-merged.mount: Deactivated successfully.
Nov 24 09:39:22 compute-0 podman[172139]: 2025-11-24 09:39:22.23597278 +0000 UTC m=+0.194725080 container remove fcbb3027568a71abee8a6b15c397845d4fafe45658f3f6a5d2f5a2fe663cd422 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:39:22 compute-0 systemd[1]: libpod-conmon-fcbb3027568a71abee8a6b15c397845d4fafe45658f3f6a5d2f5a2fe663cd422.scope: Deactivated successfully.
Nov 24 09:39:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093922 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:39:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:22.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:22 compute-0 podman[172180]: 2025-11-24 09:39:22.426780629 +0000 UTC m=+0.060700263 container create 90f00f361b3a361d2827b6dfe5ebc3248ec2a6ea4afd6f99c4e34e121e981da1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_nobel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:39:22 compute-0 systemd[1]: Started libpod-conmon-90f00f361b3a361d2827b6dfe5ebc3248ec2a6ea4afd6f99c4e34e121e981da1.scope.
Nov 24 09:39:22 compute-0 podman[172180]: 2025-11-24 09:39:22.398280925 +0000 UTC m=+0.032200639 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:39:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135e4643486d0a0c86f27b2ab857c61547c24d1a4d383c45bd37ed73e17c7d89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135e4643486d0a0c86f27b2ab857c61547c24d1a4d383c45bd37ed73e17c7d89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135e4643486d0a0c86f27b2ab857c61547c24d1a4d383c45bd37ed73e17c7d89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135e4643486d0a0c86f27b2ab857c61547c24d1a4d383c45bd37ed73e17c7d89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:22 compute-0 podman[172180]: 2025-11-24 09:39:22.516954421 +0000 UTC m=+0.150874075 container init 90f00f361b3a361d2827b6dfe5ebc3248ec2a6ea4afd6f99c4e34e121e981da1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_nobel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:39:22 compute-0 podman[172180]: 2025-11-24 09:39:22.525817637 +0000 UTC m=+0.159737261 container start 90f00f361b3a361d2827b6dfe5ebc3248ec2a6ea4afd6f99c4e34e121e981da1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 24 09:39:22 compute-0 podman[172180]: 2025-11-24 09:39:22.529276414 +0000 UTC m=+0.163196038 container attach 90f00f361b3a361d2827b6dfe5ebc3248ec2a6ea4afd6f99c4e34e121e981da1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:39:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093922 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]: {
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:     "0": [
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:         {
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             "devices": [
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "/dev/loop3"
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             ],
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             "lv_name": "ceph_lv0",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             "lv_size": "21470642176",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             "name": "ceph_lv0",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             "tags": {
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.cluster_name": "ceph",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.crush_device_class": "",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.encrypted": "0",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.osd_id": "0",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.type": "block",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.vdo": "0",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:                 "ceph.with_tpm": "0"
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             },
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             "type": "block",
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:             "vg_name": "ceph_vg0"
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:         }
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]:     ]
Nov 24 09:39:22 compute-0 dazzling_nobel[172197]: }
Nov 24 09:39:22 compute-0 ceph-mon[74331]: pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:22 compute-0 systemd[1]: libpod-90f00f361b3a361d2827b6dfe5ebc3248ec2a6ea4afd6f99c4e34e121e981da1.scope: Deactivated successfully.
Nov 24 09:39:22 compute-0 podman[172180]: 2025-11-24 09:39:22.84898126 +0000 UTC m=+0.482900924 container died 90f00f361b3a361d2827b6dfe5ebc3248ec2a6ea4afd6f99c4e34e121e981da1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_nobel, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-135e4643486d0a0c86f27b2ab857c61547c24d1a4d383c45bd37ed73e17c7d89-merged.mount: Deactivated successfully.
Nov 24 09:39:22 compute-0 podman[172180]: 2025-11-24 09:39:22.896516129 +0000 UTC m=+0.530435753 container remove 90f00f361b3a361d2827b6dfe5ebc3248ec2a6ea4afd6f99c4e34e121e981da1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:39:22 compute-0 systemd[1]: libpod-conmon-90f00f361b3a361d2827b6dfe5ebc3248ec2a6ea4afd6f99c4e34e121e981da1.scope: Deactivated successfully.
Nov 24 09:39:22 compute-0 sudo[172073]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:23 compute-0 sudo[172217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:39:23 compute-0 sudo[172217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:23 compute-0 sudo[172217]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:23 compute-0 sudo[172242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:39:23 compute-0 sudo[172242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
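The _set_new_cache_sizes line above repeats throughout this window with identical figures; it is the monitor's memory autotuner dividing one cache budget across the incremental-osdmap, full-osdmap, and key-value (RocksDB) caches. A quick arithmetic check of the logged values (a sketch only; the interpretation of the three fields as sub-allocations of cache_size is an inference from their magnitudes):

    # Sanity-check the figures from the _set_new_cache_sizes line above.
    cache_size = 1020054731          # total cache budget, bytes (~0.95 GiB)
    inc_alloc  = 343932928           # each allocation is an exact multiple of 4 MiB
    full_alloc = 348127232
    kv_alloc   = 318767104
    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size - total)  # 1010827264 allocated, ~9 MiB of headroom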
Nov 24 09:39:23 compute-0 podman[172310]: 2025-11-24 09:39:23.50170122 +0000 UTC m=+0.040653444 container create ead708825a8147d04f3ceb35b3622d7b242eb31bfdb36e1d0afe97cf174a8f40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:39:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:23 compute-0 systemd[1]: Started libpod-conmon-ead708825a8147d04f3ceb35b3622d7b242eb31bfdb36e1d0afe97cf174a8f40.scope.
Nov 24 09:39:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:39:23 compute-0 podman[172310]: 2025-11-24 09:39:23.484915213 +0000 UTC m=+0.023867457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:39:23 compute-0 podman[172310]: 2025-11-24 09:39:23.588920416 +0000 UTC m=+0.127872660 container init ead708825a8147d04f3ceb35b3622d7b242eb31bfdb36e1d0afe97cf174a8f40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_agnesi, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:39:23 compute-0 podman[172310]: 2025-11-24 09:39:23.596235252 +0000 UTC m=+0.135187476 container start ead708825a8147d04f3ceb35b3622d7b242eb31bfdb36e1d0afe97cf174a8f40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_agnesi, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 24 09:39:23 compute-0 podman[172310]: 2025-11-24 09:39:23.599497225 +0000 UTC m=+0.138449469 container attach ead708825a8147d04f3ceb35b3622d7b242eb31bfdb36e1d0afe97cf174a8f40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_agnesi, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:39:23 compute-0 suspicious_agnesi[172326]: 167 167
Nov 24 09:39:23 compute-0 systemd[1]: libpod-ead708825a8147d04f3ceb35b3622d7b242eb31bfdb36e1d0afe97cf174a8f40.scope: Deactivated successfully.
Nov 24 09:39:23 compute-0 podman[172310]: 2025-11-24 09:39:23.603009584 +0000 UTC m=+0.141961808 container died ead708825a8147d04f3ceb35b3622d7b242eb31bfdb36e1d0afe97cf174a8f40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_agnesi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:39:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5252343a92db2d908eb56c9ab14ba81e9f6b5a092528223271685610ff9b1bd-merged.mount: Deactivated successfully.
Nov 24 09:39:23 compute-0 podman[172310]: 2025-11-24 09:39:23.635760117 +0000 UTC m=+0.174712341 container remove ead708825a8147d04f3ceb35b3622d7b242eb31bfdb36e1d0afe97cf174a8f40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:39:23 compute-0 systemd[1]: libpod-conmon-ead708825a8147d04f3ceb35b3622d7b242eb31bfdb36e1d0afe97cf174a8f40.scope: Deactivated successfully.
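The one-shot suspicious_agnesi container printed "167 167" and exited within milliseconds. That pattern is consistent with cephadm probing the uid/gid baked into the Ceph image (167:167 is the ceph user in upstream containers) before writing files on the host. A minimal reproduction sketch; the image digest is taken from the log, but the exact stat invocation is an assumption about how the probe is done:

    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"
    # Assumed probe: report the owner uid/gid of /var/lib/ceph inside the image.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())   # expected: "167 167", matching the container output above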
Nov 24 09:39:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:23.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
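These HEAD / request trios from 192.168.122.100 and 192.168.122.102 recur on a fixed ~2 s cadence for the rest of this window, always anonymous, always 200, always near-zero latency: the signature of load-balancer health probes against the RGW beast frontend rather than real S3 traffic. A sketch of an equivalent probe; the port is an assumption, since the frontend bind address is not shown in these lines:

    import http.client

    # Hypothetical endpoint: the RGW beast frontend on this node.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # a healthy RGW answers 200 with an empty body
    conn.close()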
Nov 24 09:39:23 compute-0 podman[172350]: 2025-11-24 09:39:23.807714787 +0000 UTC m=+0.052519196 container create 1f263c75ad44a14ed6bb82ab07867b3f6ecf7e20688b584df259d4d52443cbad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:39:23 compute-0 systemd[1]: Started libpod-conmon-1f263c75ad44a14ed6bb82ab07867b3f6ecf7e20688b584df259d4d52443cbad.scope.
Nov 24 09:39:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:39:23 compute-0 podman[172350]: 2025-11-24 09:39:23.780350862 +0000 UTC m=+0.025155291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f215d48fc3e45da52a0d5094741620fb4ecd6b9f858431ec267b4132df237011/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f215d48fc3e45da52a0d5094741620fb4ecd6b9f858431ec267b4132df237011/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f215d48fc3e45da52a0d5094741620fb4ecd6b9f858431ec267b4132df237011/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f215d48fc3e45da52a0d5094741620fb4ecd6b9f858431ec267b4132df237011/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
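The kernel emits one "supports timestamps until 2038" notice per bind-mount because this XFS filesystem was formatted without the bigtime feature, so its inode timestamps top out at the signed 32-bit epoch limit. The 0x7fffffff in the message decodes as shown below:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch: the classic Y2038 boundary.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00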
Nov 24 09:39:23 compute-0 podman[172350]: 2025-11-24 09:39:23.895486158 +0000 UTC m=+0.140290567 container init 1f263c75ad44a14ed6bb82ab07867b3f6ecf7e20688b584df259d4d52443cbad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_proskuriakova, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 09:39:23 compute-0 podman[172350]: 2025-11-24 09:39:23.90422953 +0000 UTC m=+0.149033939 container start 1f263c75ad44a14ed6bb82ab07867b3f6ecf7e20688b584df259d4d52443cbad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:39:23 compute-0 podman[172350]: 2025-11-24 09:39:23.907526824 +0000 UTC m=+0.152331233 container attach 1f263c75ad44a14ed6bb82ab07867b3f6ecf7e20688b584df259d4d52443cbad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_proskuriakova, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Nov 24 09:39:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:24.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:24 compute-0 lvm[172442]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:39:24 compute-0 lvm[172442]: VG ceph_vg0 finished
Nov 24 09:39:24 compute-0 hopeful_proskuriakova[172367]: {}
Nov 24 09:39:24 compute-0 systemd[1]: libpod-1f263c75ad44a14ed6bb82ab07867b3f6ecf7e20688b584df259d4d52443cbad.scope: Deactivated successfully.
Nov 24 09:39:24 compute-0 systemd[1]: libpod-1f263c75ad44a14ed6bb82ab07867b3f6ecf7e20688b584df259d4d52443cbad.scope: Consumed 1.259s CPU time.
Nov 24 09:39:24 compute-0 podman[172350]: 2025-11-24 09:39:24.682837639 +0000 UTC m=+0.927642048 container died 1f263c75ad44a14ed6bb82ab07867b3f6ecf7e20688b584df259d4d52443cbad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f215d48fc3e45da52a0d5094741620fb4ecd6b9f858431ec267b4132df237011-merged.mount: Deactivated successfully.
Nov 24 09:39:24 compute-0 podman[172350]: 2025-11-24 09:39:24.736802011 +0000 UTC m=+0.981606410 container remove 1f263c75ad44a14ed6bb82ab07867b3f6ecf7e20688b584df259d4d52443cbad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 09:39:24 compute-0 systemd[1]: libpod-conmon-1f263c75ad44a14ed6bb82ab07867b3f6ecf7e20688b584df259d4d52443cbad.scope: Deactivated successfully.
Nov 24 09:39:24 compute-0 sudo[172242]: pam_unix(sudo:session): session closed for user root
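Tying this sequence together: the sudo command at 09:39:23 ran cephadm's ceph-volume wrapper with "raw list --format json", the hopeful_proskuriakova container carried it out, and its only output was {} — no raw-mode OSD devices on this host (the lvm lines confirm the disks sit in VG ceph_vg0 instead). A sketch of how an orchestration script might consume that output, with the command line lifted from the log and assuming the wrapper's stdout is the bare JSON, as the {} above suggests:

    import json
    import subprocess

    # Command as logged above (fsid, image digest, and cephadm path from this host's log).
    cmd = [
        "sudo", "/bin/python3",
        "/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36",
        "--image", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec",
        "--timeout", "895",
        "ceph-volume", "--fsid", "84a084c3-61a7-5de7-8207-1f88efa59a64",
        "--", "raw", "list", "--format", "json",
    ]
    devices = json.loads(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)
    print(devices)   # {} here: no raw-mode OSDs to report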
Nov 24 09:39:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:39:24 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:39:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:39:24 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:39:24 compute-0 ceph-mon[74331]: pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:24 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:39:24 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:39:24 compute-0 sudo[172457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:39:24 compute-0 sudo[172457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:24 compute-0 sudo[172457]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:39:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:25.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:39:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:26.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:26 compute-0 ceph-mon[74331]: pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:39:27.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:39:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:27.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:28.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:28 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Scheduled restart job, restart counter is at 3.
Nov 24 09:39:28 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:39:28 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 2.111s CPU time.
Nov 24 09:39:28 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64...
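The nfs unit is on its third automatic restart; systemd schedules these per the unit's Restart= policy and tracks the count in the NRestarts property, which can be read back directly, as in this sketch (unit name taken from the log):

    import subprocess

    UNIT = "ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service"
    n = subprocess.run(
        ["systemctl", "show", "-p", "NRestarts", "--value", UNIT],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"{UNIT} restarted {n} times")   # "3" at this point in the log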
Nov 24 09:39:28 compute-0 podman[172531]: 2025-11-24 09:39:28.693380284 +0000 UTC m=+0.044249583 container create 6cbf7d6691b854e505035fd6b4a8cbdc0dc30dce35769b238629c50fa7ee7317 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 09:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276545df570905e427b2dad17fecb69f3cf1685f8c2b1cce78fe30ceaa4089f6/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276545df570905e427b2dad17fecb69f3cf1685f8c2b1cce78fe30ceaa4089f6/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276545df570905e427b2dad17fecb69f3cf1685f8c2b1cce78fe30ceaa4089f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276545df570905e427b2dad17fecb69f3cf1685f8c2b1cce78fe30ceaa4089f6/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ssprex-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:39:28 compute-0 podman[172531]: 2025-11-24 09:39:28.749681994 +0000 UTC m=+0.100551313 container init 6cbf7d6691b854e505035fd6b4a8cbdc0dc30dce35769b238629c50fa7ee7317 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:39:28 compute-0 podman[172531]: 2025-11-24 09:39:28.757237184 +0000 UTC m=+0.108106483 container start 6cbf7d6691b854e505035fd6b4a8cbdc0dc30dce35769b238629c50fa7ee7317 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 09:39:28 compute-0 bash[172531]: 6cbf7d6691b854e505035fd6b4a8cbdc0dc30dce35769b238629c50fa7ee7317
Nov 24 09:39:28 compute-0 podman[172531]: 2025-11-24 09:39:28.676491889 +0000 UTC m=+0.027361208 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:39:28 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:39:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:28 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:39:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:28 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:39:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:28 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:39:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:28 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:39:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:28 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:39:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:28 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:39:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:28 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:39:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:28 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:39:28 compute-0 ceph-mon[74331]: pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:39:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:39:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:29.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:30.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:30 compute-0 ceph-mon[74331]: pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:39:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:39:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:39:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:39:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:39:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:39:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:31.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:32 compute-0 sshd-session[172592]: Connection closed by authenticating user ftp 209.38.206.249 port 58542 [preauth]
Nov 24 09:39:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:32.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:33.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:34.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:34 compute-0 ceph-mon[74331]: pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:34 compute-0 sshd-session[172597]: Invalid user kafka from 209.38.206.249 port 58556
Nov 24 09:39:35 compute-0 sshd-session[172597]: Connection closed by invalid user kafka 209.38.206.249 port 58556 [preauth]
Nov 24 09:39:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:35.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:35 compute-0 ceph-mon[74331]: pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:36.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:36 compute-0 sshd-session[172601]: Invalid user db2inst1 from 209.38.206.249 port 58564
Nov 24 09:39:36 compute-0 sshd-session[172601]: Connection closed by invalid user db2inst1 209.38.206.249 port 58564 [preauth]
Nov 24 09:39:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:39:37.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:39:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:39:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:37.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:39:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:38.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:39:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:38 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:39:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:38 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:39:39 compute-0 ceph-mon[74331]: pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:39:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:39.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:40 compute-0 ceph-mon[74331]: pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:39:40 compute-0 ceph-mon[74331]: pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:39:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:39:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:40.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:39:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:39:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:39:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:39:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:39:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:41 compute-0 sudo[172608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:39:41 compute-0 sudo[172608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:39:41 compute-0 sudo[172608]: pam_unix(sudo:session): session closed for user root
Nov 24 09:39:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:41.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:42 compute-0 sshd-session[172633]: Invalid user es from 209.38.206.249 port 44336
Nov 24 09:39:42 compute-0 podman[172636]: 2025-11-24 09:39:42.399934398 +0000 UTC m=+0.076237649 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
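The health_status=healthy event above comes from podman periodically executing the healthcheck declared in config_data ('test': '/openstack/healthcheck', bind-mounted read-only into the container). The same check can be triggered on demand outside the timer; a sketch using the container name from the log:

    import subprocess

    # Run the container's declared healthcheck once; exit code 0 means healthy.
    r = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})")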
Nov 24 09:39:42 compute-0 sshd-session[172633]: Connection closed by invalid user es 209.38.206.249 port 44336 [preauth]
Nov 24 09:39:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:42.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:42 compute-0 ceph-mon[74331]: pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:42 compute-0 sshd-session[172663]: Invalid user testuser from 209.38.206.249 port 44344
Nov 24 09:39:42 compute-0 sshd-session[172663]: Connection closed by invalid user testuser 209.38.206.249 port 44344 [preauth]
Nov 24 09:39:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:43 compute-0 sshd-session[172665]: Invalid user linaro from 209.38.206.249 port 44354
Nov 24 09:39:43 compute-0 sshd-session[172665]: Connection closed by invalid user linaro 209.38.206.249 port 44354 [preauth]
Nov 24 09:39:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:39:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:43.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:44.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:44 compute-0 ceph-mon[74331]: pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:39:45
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'volumes', '.nfs', '.mgr', 'backups']
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
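Two observations on this ganesha startup. First, every DBUS :CRIT line traces back to a single cause, the missing /run/dbus/system_bus_socket inside the container; once dbus_bus_get fails, all the register_object_path calls and the dbus service thread necessarily fail too, yet the server still reaches the INITIALIZED banner above. Second, the server entered its 90-second grace period at 09:39:28 but lifted it already at 09:39:45, because the backend reported reclaim complete with a client count of zero. A trivial sketch of the DBus precondition (path taken from the CRIT message):

    import os

    # The precondition behind the gsh_dbus_pkginit CRIT above.
    SOCK = "/run/dbus/system_bus_socket"
    if not os.path.exists(SOCK):
        print(f"{SOCK} missing: ganesha's DBus admin interface cannot start")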
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
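The pg_autoscaler rows above all follow one formula: target PGs = (fraction of cluster capacity used) x bias x (OSD count x target PGs per OSD), then quantization toward a power of two (with floors and hysteresis the log does not show). From the logged ratios the multiplier works out to exactly 300, i.e. 3 OSDs at the default 100 PGs per OSD — an inference from the numbers, not something stated in the log. Reproducing the '.mgr' and 'cephfs.cephfs.meta' rows:

    # Worked check of the pg_autoscaler rows above.
    OSDS, TARGET_PER_OSD = 3, 100   # inferred: logged pg target / usage ratio = 300 * bias

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * OSDS * TARGET_PER_OSD

    print(pg_target(7.185749983720779e-06, 1.0))   # 0.0021557249951... -> quantized to 1
    print(pg_target(5.087256625643029e-07, 4.0))   # 0.0006104707950... -> quantized to 16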
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:39:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:39:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:45 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24f8000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:39:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:45.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:46.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:46 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:46 compute-0 ceph-mon[74331]: pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:39:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:39:47.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:39:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:47 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:47 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Check health
Nov 24 09:39:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:47 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:39:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:47.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:39:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093948 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:39:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/093948 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:39:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:48 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d8000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:48 compute-0 ceph-mon[74331]: pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:39:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:49 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:39:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:49 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:49 compute-0 podman[172863]: 2025-11-24 09:39:49.790049438 +0000 UTC m=+0.057113581 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 24 09:39:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:49.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:50 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:39:50] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:39:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:39:50] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:39:51 compute-0 ceph-mon[74331]: pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:39:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:51 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d8001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:51 compute-0 sshd-session[172884]: Invalid user web from 209.38.206.249 port 44368
Nov 24 09:39:51 compute-0 sshd-session[172884]: Connection closed by invalid user web 209.38.206.249 port 44368 [preauth]
Nov 24 09:39:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:39:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:51 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:51 compute-0 sshd-session[172887]: Invalid user ec2-user from 209.38.206.249 port 44508
Nov 24 09:39:51 compute-0 sshd-session[172887]: Connection closed by invalid user ec2-user 209.38.206.249 port 44508 [preauth]
Nov 24 09:39:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:39:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:51.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:39:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:39:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:52.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:39:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:52 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:53 compute-0 ceph-mon[74331]: pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:39:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:53 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:53 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d8001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:53.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:54.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:54 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:55 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:55 compute-0 ceph-mon[74331]: pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:55 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:39:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:55.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:39:56 compute-0 sshd-session[172899]: Invalid user docker from 209.38.206.249 port 44510
Nov 24 09:39:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:56.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:56 compute-0 sshd-session[172899]: Connection closed by invalid user docker 209.38.206.249 port 44510 [preauth]
Nov 24 09:39:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:56 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d8001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:39:57.007Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:39:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:39:57.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:39:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:39:57.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:39:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:57 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:57 compute-0 ceph-mon[74331]: pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:57 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:57.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:39:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:39:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:39:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:39:58.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:39:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:58 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:59 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d8002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:59 compute-0 sshd-session[172904]: Invalid user hadoop from 209.38.206.249 port 44378
Nov 24 09:39:59 compute-0 sshd-session[172904]: Connection closed by invalid user hadoop 209.38.206.249 port 44378 [preauth]
Nov 24 09:39:59 compute-0 ceph-mon[74331]: pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Nov 24 09:39:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:39:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:39:59 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:39:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:39:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:39:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:39:59.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:00 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 24 09:40:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:00.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:00 compute-0 ceph-mon[74331]: pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:00 compute-0 ceph-mon[74331]: overall HEALTH_OK
Nov 24 09:40:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:40:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:00 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:40:00] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:40:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:40:00] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:40:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:01 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:01 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d8002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:01 compute-0 sudo[172909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:40:01 compute-0 sudo[172909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:01 compute-0 sudo[172909]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:01.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:02.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:02 compute-0 ceph-mon[74331]: pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:02 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:03 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:03 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:03.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:04.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:04 compute-0 kernel: SELinux:  Converting 2776 SID table entries...
Nov 24 09:40:04 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 09:40:04 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 09:40:04 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 09:40:04 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 09:40:04 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 09:40:04 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 09:40:04 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 09:40:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:04 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d8002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:04 compute-0 ceph-mon[74331]: pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:05 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:05 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:05.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:06.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:06 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:06 compute-0 ceph-mon[74331]: pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:40:07.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:40:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:40:07.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:40:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:07 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:40:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:07 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:07.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:08.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:08 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:08 compute-0 ceph-mon[74331]: pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:40:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:09 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=cleanup t=2025-11-24T09:40:09.201706125Z level=info msg="Completed cleanup jobs" duration=44.089536ms
Nov 24 09:40:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=grafana.update.checker t=2025-11-24T09:40:09.322272606Z level=info msg="Update check succeeded" duration=73.00186ms
Nov 24 09:40:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=plugins.update.checker t=2025-11-24T09:40:09.32237122Z level=info msg="Update check succeeded" duration=64.668772ms
Nov 24 09:40:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:09 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:09.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:10.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:10 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:10 compute-0 ceph-mon[74331]: pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:10 compute-0 sshd-session[172951]: Invalid user devopsadmin from 209.38.206.249 port 44394
Nov 24 09:40:10 compute-0 sshd-session[172951]: Connection closed by invalid user devopsadmin 209.38.206.249 port 44394 [preauth]
Nov 24 09:40:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:40:10] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:40:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:40:10] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:40:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:11 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:11 compute-0 sshd-session[172953]: Invalid user centos from 209.38.206.249 port 46442
Nov 24 09:40:11 compute-0 sshd-session[172953]: Connection closed by invalid user centos 209.38.206.249 port 46442 [preauth]
Nov 24 09:40:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:11 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:11.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:12.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:12 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:12 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 24 09:40:12 compute-0 podman[172959]: 2025-11-24 09:40:12.821369883 +0000 UTC m=+0.093473773 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 09:40:12 compute-0 ceph-mon[74331]: pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:13 compute-0 sshd-session[172957]: Invalid user devopsuser from 209.38.206.249 port 46452
Nov 24 09:40:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:13 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:13 compute-0 sshd-session[172957]: Connection closed by invalid user devopsuser 209.38.206.249 port 46452 [preauth]
Nov 24 09:40:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:13 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:40:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:13.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:40:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:14.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:14 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:15 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d8004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:40:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:40:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:40:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:40:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:40:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:40:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:15 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:15.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:16 compute-0 ceph-mon[74331]: pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:16 compute-0 kernel: SELinux:  Converting 2776 SID table entries...
Nov 24 09:40:16 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 09:40:16 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 09:40:16 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 09:40:16 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 09:40:16 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 09:40:16 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 09:40:16 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 09:40:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:16.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:16 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:16 compute-0 sshd-session[172993]: Invalid user a from 209.38.206.249 port 46456
Nov 24 09:40:16 compute-0 sshd-session[172993]: Connection closed by invalid user a 209.38.206.249 port 46456 [preauth]
Nov 24 09:40:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:40:17.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:40:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:17 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:17 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:40:17 compute-0 ceph-mon[74331]: pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:40:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:17 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24c4000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:17.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:18.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:18 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24c4000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:19 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:19 compute-0 ceph-mon[74331]: pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:40:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:19 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:19.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:20.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:40:20.548 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:40:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:40:20.550 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:40:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:40:20.550 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:40:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:20 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24c4000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:20 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 24 09:40:20 compute-0 podman[173002]: 2025-11-24 09:40:20.785572589 +0000 UTC m=+0.050399589 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 09:40:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:40:20] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:40:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:40:20] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:40:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:21 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:21 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:21 compute-0 sudo[173022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:40:21 compute-0 sudo[173022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:21 compute-0 sudo[173022]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:21.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
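The three radosgw lines above bracket one anonymous "HEAD / HTTP/1.0" request, the kind of lightweight probe a load balancer typically sends; the trailing "beast:" access line carries the client IP, status, and latency. A sketch that extracts those fields, assuming only the line shape seen here (radosgw does not guarantee this format):

    import re

    # Shape of the "beast:" access line in this log: request pointer, client IP,
    # user, timestamp, quoted request, status, then a latency=...s suffix.
    BEAST_RE = re.compile(
        r'beast: \S+ (?P<client>[\d.]+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) .*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous '
            '[24/Nov/2025:09:40:21.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group("client"), m.group("req"), m.group("status"), m.group("latency"))
    # 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000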
Nov 24 09:40:22 compute-0 sshd-session[173047]: Invalid user max from 209.38.206.249 port 48958
Nov 24 09:40:22 compute-0 sshd-session[173047]: Connection closed by invalid user max 209.38.206.249 port 48958 [preauth]
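The two sshd-session lines above are an unauthenticated SSH probe: a login attempt for a nonexistent user ("max") from 209.38.206.249 that disconnected before authentication ([preauth]). A small tally sketch over lines of this shape, useful when deciding whether a source IP is scanning; the sample line is a trimmed copy of the one above:

    import re
    from collections import Counter

    INVALID_RE = re.compile(r"Invalid user (?P<user>\S+) from (?P<ip>[\d.]+)")

    def count_probes(lines):
        """Tally 'Invalid user' preauth probes per source IP."""
        hits = Counter()
        for line in lines:
            m = INVALID_RE.search(line)
            if m:
                hits[m.group("ip")] += 1
        return hits

    print(count_probes([
        "sshd-session[173047]: Invalid user max from 209.38.206.249 port 48958",
    ]))
    # Counter({'209.38.206.249': 1})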
Nov 24 09:40:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:22.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:22 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:23 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24c4001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:23 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:23.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:24.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:24 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:25 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:25 compute-0 ceph-mon[74331]: pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:25 compute-0 sudo[173052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:40:25 compute-0 sudo[173052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:25 compute-0 sudo[173052]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:25 compute-0 sudo[173077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:40:25 compute-0 sudo[173077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:25 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24c4001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:25 compute-0 sudo[173077]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 09:40:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:25.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:26 compute-0 ceph-mon[74331]: pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:26 compute-0 ceph-mon[74331]: pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:26 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:26 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
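The audit entries above show the mgr (presumably cephadm's osd_memory_target autotuning pass) dispatching "config rm" to the mon to clear any per-host osd_memory_target override for compute-0; the same pattern repeats for compute-1 and compute-2 below. The equivalent command could be issued by hand on a node with a working admin keyring; a purely illustrative sketch mirroring the dispatched command:

    import subprocess

    # Same command the audit log shows the mgr dispatching; "ceph config rm
    # <who> <name>" removes a config override for that scope.
    subprocess.run(
        ["ceph", "config", "rm", "osd/host:compute-0", "osd_memory_target"],
        check=True,
    )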
Nov 24 09:40:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:26.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:26 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:40:27.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
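The alertmanager dispatcher error above means both ceph-dashboard webhook receivers gave up after two attempts: the POSTs to the prometheus_receiver endpoint on compute-1 and compute-2 (port 8443) hit the context deadline, so one alert notification was dropped. A sketch that pulls the unreachable webhook URLs out of such a message; the ERR string is the msg= payload from the line above with the Go quoting removed:

    import re

    ERR = ('ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: '
           'Post "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver": '
           'context deadline exceeded; '
           'ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: '
           'Post "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver": '
           'context deadline exceeded')

    # Every Post "<url>" target that failed.
    urls = re.findall(r'Post "([^"]+)"', ERR)
    print(urls)
    # ['http://compute-2...:8443/api/prometheus_receiver',
    #  'http://compute-1...:8443/api/prometheus_receiver']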
Nov 24 09:40:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:27 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:27 compute-0 ceph-mon[74331]: pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:40:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:40:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:40:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:27 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:27.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 09:40:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:28.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:28 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:28 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:28 compute-0 ceph-mon[74331]: pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:40:28 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:28 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:28 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24c4001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:40:28 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:40:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:29 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:29 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:40:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:29.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:40:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 09:40:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:29 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:29 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:40:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:40:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:30 compute-0 sudo[173736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:40:30 compute-0 sudo[173736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:30 compute-0 sudo[173736]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:30 compute-0 sudo[173803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:40:30 compute-0 sudo[173803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:30 compute-0 podman[174113]: 2025-11-24 09:40:30.517952955 +0000 UTC m=+0.042658535 container create 73607e03407fe50716153bcc9072b6e40aa00f957f331e199c3e8e77ca878c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_goldwasser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:40:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:30.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:30 compute-0 systemd[1]: Started libpod-conmon-73607e03407fe50716153bcc9072b6e40aa00f957f331e199c3e8e77ca878c44.scope.
Nov 24 09:40:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:40:30 compute-0 podman[174113]: 2025-11-24 09:40:30.498023826 +0000 UTC m=+0.022729426 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:40:30 compute-0 podman[174113]: 2025-11-24 09:40:30.604130171 +0000 UTC m=+0.128835771 container init 73607e03407fe50716153bcc9072b6e40aa00f957f331e199c3e8e77ca878c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_goldwasser, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:40:30 compute-0 podman[174113]: 2025-11-24 09:40:30.611704643 +0000 UTC m=+0.136410223 container start 73607e03407fe50716153bcc9072b6e40aa00f957f331e199c3e8e77ca878c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_goldwasser, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 24 09:40:30 compute-0 podman[174113]: 2025-11-24 09:40:30.615023122 +0000 UTC m=+0.139728702 container attach 73607e03407fe50716153bcc9072b6e40aa00f957f331e199c3e8e77ca878c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_goldwasser, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:40:30 compute-0 beautiful_goldwasser[174197]: 167 167
Nov 24 09:40:30 compute-0 systemd[1]: libpod-73607e03407fe50716153bcc9072b6e40aa00f957f331e199c3e8e77ca878c44.scope: Deactivated successfully.
Nov 24 09:40:30 compute-0 podman[174113]: 2025-11-24 09:40:30.617528412 +0000 UTC m=+0.142233992 container died 73607e03407fe50716153bcc9072b6e40aa00f957f331e199c3e8e77ca878c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_goldwasser, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5eb71580c72e4ffe7c9ea57622eb08c02b97b32c0b46db08f3420e1070c6d97b-merged.mount: Deactivated successfully.
Nov 24 09:40:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:30 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:30 compute-0 podman[174113]: 2025-11-24 09:40:30.655033792 +0000 UTC m=+0.179739372 container remove 73607e03407fe50716153bcc9072b6e40aa00f957f331e199c3e8e77ca878c44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:40:30 compute-0 systemd[1]: libpod-conmon-73607e03407fe50716153bcc9072b6e40aa00f957f331e199c3e8e77ca878c44.scope: Deactivated successfully.
Nov 24 09:40:30 compute-0 podman[174347]: 2025-11-24 09:40:30.803938172 +0000 UTC m=+0.038549926 container create 42349828027b43569c8ddf1d56ca3e0ccda9745e07520bf8508730f6d0829dc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:40:30 compute-0 systemd[1]: Started libpod-conmon-42349828027b43569c8ddf1d56ca3e0ccda9745e07520bf8508730f6d0829dc5.scope.
Nov 24 09:40:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3021bf05b0a9e41ef2920bc79a34b9fe46d364322082427af57599a8a554fc88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3021bf05b0a9e41ef2920bc79a34b9fe46d364322082427af57599a8a554fc88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3021bf05b0a9e41ef2920bc79a34b9fe46d364322082427af57599a8a554fc88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3021bf05b0a9e41ef2920bc79a34b9fe46d364322082427af57599a8a554fc88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3021bf05b0a9e41ef2920bc79a34b9fe46d364322082427af57599a8a554fc88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:30 compute-0 podman[174347]: 2025-11-24 09:40:30.87268343 +0000 UTC m=+0.107295184 container init 42349828027b43569c8ddf1d56ca3e0ccda9745e07520bf8508730f6d0829dc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_haibt, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:40:30 compute-0 podman[174347]: 2025-11-24 09:40:30.787138869 +0000 UTC m=+0.021750643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:40:30 compute-0 podman[174347]: 2025-11-24 09:40:30.883688124 +0000 UTC m=+0.118299878 container start 42349828027b43569c8ddf1d56ca3e0ccda9745e07520bf8508730f6d0829dc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:40:30 compute-0 podman[174347]: 2025-11-24 09:40:30.886973313 +0000 UTC m=+0.121585107 container attach 42349828027b43569c8ddf1d56ca3e0ccda9745e07520bf8508730f6d0829dc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_haibt, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:40:30 compute-0 ceph-mon[74331]: pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:30 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:40:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:40:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:40:30 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:30 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:40:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:40:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:40:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:40:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:40:30] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:40:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:40:30] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:40:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:31 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24c40032f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:31 compute-0 fervent_haibt[174416]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:40:31 compute-0 fervent_haibt[174416]: --> All data devices are unavailable
Nov 24 09:40:31 compute-0 systemd[1]: libpod-42349828027b43569c8ddf1d56ca3e0ccda9745e07520bf8508730f6d0829dc5.scope: Deactivated successfully.
Nov 24 09:40:31 compute-0 podman[174347]: 2025-11-24 09:40:31.21464512 +0000 UTC m=+0.449256874 container died 42349828027b43569c8ddf1d56ca3e0ccda9745e07520bf8508730f6d0829dc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_haibt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Nov 24 09:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3021bf05b0a9e41ef2920bc79a34b9fe46d364322082427af57599a8a554fc88-merged.mount: Deactivated successfully.
Nov 24 09:40:31 compute-0 podman[174347]: 2025-11-24 09:40:31.262801625 +0000 UTC m=+0.497413379 container remove 42349828027b43569c8ddf1d56ca3e0ccda9745e07520bf8508730f6d0829dc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_haibt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:40:31 compute-0 systemd[1]: libpod-conmon-42349828027b43569c8ddf1d56ca3e0ccda9745e07520bf8508730f6d0829dc5.scope: Deactivated successfully.
Nov 24 09:40:31 compute-0 sudo[173803]: pam_unix(sudo:session): session closed for user root
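Taken together, the fervent_haibt output above ("passed data devices: 0 physical, 1 LVM" then "All data devices are unavailable") and the lvm list that follows indicate the "lvm batch --no-auto /dev/ceph_vg0/ceph_lv0" call found the LV already prepared as OSD 0, so the batch run created nothing and the container exited cleanly. One way to check that state directly is to read the LV's ceph tags; a sketch assuming lvm2's JSON report output and the LV path from the batch command above:

    import json
    import subprocess

    # Ask LVM for the tags on the candidate LV; --reportformat json is standard
    # lvm2 report output.
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,lv_tags",
         "/dev/ceph_vg0/ceph_lv0"],
        check=True, capture_output=True, text=True,
    ).stdout

    report = json.loads(out)
    tags = report["report"][0]["lv"][0]["lv_tags"]
    # An LV already consumed by ceph-volume carries ceph.osd_id=... in its tags,
    # which is why batch reports it as unavailable.
    print("already an OSD" if "ceph.osd_id=" in tags else "free")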
Nov 24 09:40:31 compute-0 sudo[174750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:40:31 compute-0 sudo[174750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:31 compute-0 sudo[174750]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:31 compute-0 sudo[174817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:40:31 compute-0 sudo[174817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:31 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:31 compute-0 podman[175141]: 2025-11-24 09:40:31.790655058 +0000 UTC m=+0.041216714 container create 9e503eb55e64f7cfcc60ea18b2ae24cbf63e80c38618fc200dda7ac1ef12dce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hertz, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:40:31 compute-0 systemd[1]: Started libpod-conmon-9e503eb55e64f7cfcc60ea18b2ae24cbf63e80c38618fc200dda7ac1ef12dce9.scope.
Nov 24 09:40:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:31.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:40:31 compute-0 podman[175141]: 2025-11-24 09:40:31.773047423 +0000 UTC m=+0.023609099 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:40:31 compute-0 podman[175141]: 2025-11-24 09:40:31.873670017 +0000 UTC m=+0.124231693 container init 9e503eb55e64f7cfcc60ea18b2ae24cbf63e80c38618fc200dda7ac1ef12dce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hertz, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 09:40:31 compute-0 podman[175141]: 2025-11-24 09:40:31.879730003 +0000 UTC m=+0.130291659 container start 9e503eb55e64f7cfcc60ea18b2ae24cbf63e80c38618fc200dda7ac1ef12dce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hertz, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:40:31 compute-0 podman[175141]: 2025-11-24 09:40:31.883283775 +0000 UTC m=+0.133845441 container attach 9e503eb55e64f7cfcc60ea18b2ae24cbf63e80c38618fc200dda7ac1ef12dce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hertz, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:40:31 compute-0 loving_hertz[175221]: 167 167
Nov 24 09:40:31 compute-0 systemd[1]: libpod-9e503eb55e64f7cfcc60ea18b2ae24cbf63e80c38618fc200dda7ac1ef12dce9.scope: Deactivated successfully.
Nov 24 09:40:31 compute-0 podman[175141]: 2025-11-24 09:40:31.886291872 +0000 UTC m=+0.136853548 container died 9e503eb55e64f7cfcc60ea18b2ae24cbf63e80c38618fc200dda7ac1ef12dce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hertz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a16a6bfaf84fa06162c4f030811606f4c51c06b5ca9e59cd4fd6c0d35b74a584-merged.mount: Deactivated successfully.
Nov 24 09:40:31 compute-0 podman[175141]: 2025-11-24 09:40:31.932318689 +0000 UTC m=+0.182880345 container remove 9e503eb55e64f7cfcc60ea18b2ae24cbf63e80c38618fc200dda7ac1ef12dce9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hertz, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 24 09:40:31 compute-0 systemd[1]: libpod-conmon-9e503eb55e64f7cfcc60ea18b2ae24cbf63e80c38618fc200dda7ac1ef12dce9.scope: Deactivated successfully.
Nov 24 09:40:32 compute-0 podman[175385]: 2025-11-24 09:40:32.086311628 +0000 UTC m=+0.043430221 container create f9ecb4ce28b2a21480d91af4c4574710fd0169fef4996c1d17e1ea75f876fdd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:40:32 compute-0 systemd[1]: Started libpod-conmon-f9ecb4ce28b2a21480d91af4c4574710fd0169fef4996c1d17e1ea75f876fdd6.scope.
Nov 24 09:40:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58c92857e126b8479a0d3a710751eb37b1bf6a4a8ed01ee5f979b022101a419/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58c92857e126b8479a0d3a710751eb37b1bf6a4a8ed01ee5f979b022101a419/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58c92857e126b8479a0d3a710751eb37b1bf6a4a8ed01ee5f979b022101a419/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58c92857e126b8479a0d3a710751eb37b1bf6a4a8ed01ee5f979b022101a419/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:32 compute-0 podman[175385]: 2025-11-24 09:40:32.066314573 +0000 UTC m=+0.023433196 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:40:32 compute-0 podman[175385]: 2025-11-24 09:40:32.168911857 +0000 UTC m=+0.126030480 container init f9ecb4ce28b2a21480d91af4c4574710fd0169fef4996c1d17e1ea75f876fdd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lichterman, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 09:40:32 compute-0 podman[175385]: 2025-11-24 09:40:32.175015274 +0000 UTC m=+0.132133867 container start f9ecb4ce28b2a21480d91af4c4574710fd0169fef4996c1d17e1ea75f876fdd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lichterman, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:40:32 compute-0 podman[175385]: 2025-11-24 09:40:32.178508414 +0000 UTC m=+0.135627007 container attach f9ecb4ce28b2a21480d91af4c4574710fd0169fef4996c1d17e1ea75f876fdd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]: {
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:     "0": [
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:         {
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             "devices": [
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "/dev/loop3"
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             ],
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             "lv_name": "ceph_lv0",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             "lv_size": "21470642176",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             "name": "ceph_lv0",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             "tags": {
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.cluster_name": "ceph",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.crush_device_class": "",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.encrypted": "0",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.osd_id": "0",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.type": "block",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.vdo": "0",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:                 "ceph.with_tpm": "0"
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             },
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             "type": "block",
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:             "vg_name": "ceph_vg0"
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:         }
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]:     ]
Nov 24 09:40:32 compute-0 recursing_lichterman[175465]: }
Nov 24 09:40:32 compute-0 systemd[1]: libpod-f9ecb4ce28b2a21480d91af4c4574710fd0169fef4996c1d17e1ea75f876fdd6.scope: Deactivated successfully.
Nov 24 09:40:32 compute-0 podman[175385]: 2025-11-24 09:40:32.485544968 +0000 UTC m=+0.442663561 container died f9ecb4ce28b2a21480d91af4c4574710fd0169fef4996c1d17e1ea75f876fdd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 09:40:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f58c92857e126b8479a0d3a710751eb37b1bf6a4a8ed01ee5f979b022101a419-merged.mount: Deactivated successfully.
Nov 24 09:40:32 compute-0 podman[175385]: 2025-11-24 09:40:32.531023919 +0000 UTC m=+0.488142512 container remove f9ecb4ce28b2a21480d91af4c4574710fd0169fef4996c1d17e1ea75f876fdd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:40:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:32.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:32 compute-0 systemd[1]: libpod-conmon-f9ecb4ce28b2a21480d91af4c4574710fd0169fef4996c1d17e1ea75f876fdd6.scope: Deactivated successfully.
Nov 24 09:40:32 compute-0 sudo[174817]: pam_unix(sudo:session): session closed for user root
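The recursing_lichterman JSON above is the output of the "ceph-volume ... lvm list --format json" invocation: a map from OSD id ("0") to the logical volumes backing it, with the ceph.* metadata duplicated between the lv_tags string and the tags object. A minimal sketch that reduces such a listing to OSD id and device paths, assuming the JSON is piped in on stdin:

    import json
    import sys

    # Feed this the JSON printed by the lvm list command captured above, e.g.
    # via: <capture> | python3 this_script.py
    listing = json.load(sys.stdin)

    for osd_id, lvs in listing.items():
        for lv in lvs:
            # 'lv_path' is the LV ceph-volume prepared as the OSD's block
            # device; 'devices' lists the physical devices beneath it.
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3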
Nov 24 09:40:32 compute-0 sudo[175816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:40:32 compute-0 sudo[175816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:32 compute-0 sudo[175816]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:32 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:32 compute-0 sudo[175887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:40:32 compute-0 sudo[175887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:32 compute-0 ceph-mon[74331]: pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:33 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:33 compute-0 podman[176232]: 2025-11-24 09:40:33.127998437 +0000 UTC m=+0.045007941 container create bcce45d2345e6e39a7af25faf8b730a3c581fe1c92a1ff2859158ca48fa76f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:40:33 compute-0 systemd[1]: Started libpod-conmon-bcce45d2345e6e39a7af25faf8b730a3c581fe1c92a1ff2859158ca48fa76f70.scope.
Nov 24 09:40:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:40:33 compute-0 podman[176232]: 2025-11-24 09:40:33.11028698 +0000 UTC m=+0.027296524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:40:33 compute-0 podman[176232]: 2025-11-24 09:40:33.209713093 +0000 UTC m=+0.126722627 container init bcce45d2345e6e39a7af25faf8b730a3c581fe1c92a1ff2859158ca48fa76f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_shamir, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:40:33 compute-0 podman[176232]: 2025-11-24 09:40:33.218271663 +0000 UTC m=+0.135281177 container start bcce45d2345e6e39a7af25faf8b730a3c581fe1c92a1ff2859158ca48fa76f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 09:40:33 compute-0 podman[176232]: 2025-11-24 09:40:33.222237365 +0000 UTC m=+0.139247009 container attach bcce45d2345e6e39a7af25faf8b730a3c581fe1c92a1ff2859158ca48fa76f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_shamir, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:40:33 compute-0 adoring_shamir[176300]: 167 167
Nov 24 09:40:33 compute-0 systemd[1]: libpod-bcce45d2345e6e39a7af25faf8b730a3c581fe1c92a1ff2859158ca48fa76f70.scope: Deactivated successfully.
Nov 24 09:40:33 compute-0 podman[176232]: 2025-11-24 09:40:33.226014103 +0000 UTC m=+0.143023627 container died bcce45d2345e6e39a7af25faf8b730a3c581fe1c92a1ff2859158ca48fa76f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-020048f72db60a185fe5717e643ab5952bd528463aa39cea104f63354a1a1192-merged.mount: Deactivated successfully.
Nov 24 09:40:33 compute-0 podman[176232]: 2025-11-24 09:40:33.261133858 +0000 UTC m=+0.178143372 container remove bcce45d2345e6e39a7af25faf8b730a3c581fe1c92a1ff2859158ca48fa76f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:40:33 compute-0 systemd[1]: libpod-conmon-bcce45d2345e6e39a7af25faf8b730a3c581fe1c92a1ff2859158ca48fa76f70.scope: Deactivated successfully.
Nov 24 09:40:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:33 compute-0 podman[176451]: 2025-11-24 09:40:33.436147369 +0000 UTC m=+0.047828444 container create a1d1cf46101c54602969f66e493390912691d6030f5f07aaa01673a72c993fa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 09:40:33 compute-0 systemd[1]: Started libpod-conmon-a1d1cf46101c54602969f66e493390912691d6030f5f07aaa01673a72c993fa4.scope.
Nov 24 09:40:33 compute-0 podman[176451]: 2025-11-24 09:40:33.415035815 +0000 UTC m=+0.026716910 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:40:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2432cf5214b385665b92a0a49f78bfd276c31258b9bb8d9aa7f20744c70b5103/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2432cf5214b385665b92a0a49f78bfd276c31258b9bb8d9aa7f20744c70b5103/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2432cf5214b385665b92a0a49f78bfd276c31258b9bb8d9aa7f20744c70b5103/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2432cf5214b385665b92a0a49f78bfd276c31258b9bb8d9aa7f20744c70b5103/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:33 compute-0 podman[176451]: 2025-11-24 09:40:33.531070456 +0000 UTC m=+0.142751531 container init a1d1cf46101c54602969f66e493390912691d6030f5f07aaa01673a72c993fa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:40:33 compute-0 podman[176451]: 2025-11-24 09:40:33.537212644 +0000 UTC m=+0.148893729 container start a1d1cf46101c54602969f66e493390912691d6030f5f07aaa01673a72c993fa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 24 09:40:33 compute-0 podman[176451]: 2025-11-24 09:40:33.540392866 +0000 UTC m=+0.152074101 container attach a1d1cf46101c54602969f66e493390912691d6030f5f07aaa01673a72c993fa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 09:40:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:33 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24c40032f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:33.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:34 compute-0 lvm[177066]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:40:34 compute-0 lvm[177066]: VG ceph_vg0 finished
Nov 24 09:40:34 compute-0 xenodochial_allen[176537]: {}
Nov 24 09:40:34 compute-0 systemd[1]: libpod-a1d1cf46101c54602969f66e493390912691d6030f5f07aaa01673a72c993fa4.scope: Deactivated successfully.
Nov 24 09:40:34 compute-0 systemd[1]: libpod-a1d1cf46101c54602969f66e493390912691d6030f5f07aaa01673a72c993fa4.scope: Consumed 1.131s CPU time.
Nov 24 09:40:34 compute-0 podman[176451]: 2025-11-24 09:40:34.241010464 +0000 UTC m=+0.852691539 container died a1d1cf46101c54602969f66e493390912691d6030f5f07aaa01673a72c993fa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2432cf5214b385665b92a0a49f78bfd276c31258b9bb8d9aa7f20744c70b5103-merged.mount: Deactivated successfully.
Nov 24 09:40:34 compute-0 podman[176451]: 2025-11-24 09:40:34.303407772 +0000 UTC m=+0.915088847 container remove a1d1cf46101c54602969f66e493390912691d6030f5f07aaa01673a72c993fa4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:40:34 compute-0 systemd[1]: libpod-conmon-a1d1cf46101c54602969f66e493390912691d6030f5f07aaa01673a72c993fa4.scope: Deactivated successfully.
Nov 24 09:40:34 compute-0 sudo[175887]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:40:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:40:34 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:34 compute-0 sudo[177263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:40:34 compute-0 sudo[177263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:34 compute-0 sudo[177263]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:34.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:34 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24c40032f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:35 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:35 compute-0 ceph-mon[74331]: pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:35 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:35 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:40:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:35 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:35.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:36.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:36 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:40:37.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:40:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:37 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:37 compute-0 ceph-mon[74331]: pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:40:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:37 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24c4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:37.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:38.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:38 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:39 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:39 compute-0 ceph-mon[74331]: pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:40:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:39 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:39.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:40.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:40 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24c4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:40:40] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:40:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:40:40] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:40:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:41 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:41 compute-0 ceph-mon[74331]: pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:41 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:41.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:41 compute-0 sudo[182325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:40:41 compute-0 sudo[182325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:40:41 compute-0 sudo[182325]: pam_unix(sudo:session): session closed for user root
Nov 24 09:40:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:42.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:42 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24d4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:42 compute-0 sshd-session[182667]: Invalid user cs2 from 209.38.206.249 port 48962
Nov 24 09:40:42 compute-0 sshd-session[182667]: Connection closed by invalid user cs2 209.38.206.249 port 48962 [preauth]
Nov 24 09:40:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:43 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24c4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:43 compute-0 ceph-mon[74331]: pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:43 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24cc003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:40:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:43.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:43 compute-0 podman[183604]: 2025-11-24 09:40:43.861449476 +0000 UTC m=+0.135851842 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 24 09:40:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:44.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:44 compute-0 kernel: ganesha.nfsd[172732]: segfault at 50 ip 00007f25a6ee132e sp 00007f2575ffa210 error 4 in libntirpc.so.5.8[7f25a6ec6000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 24 09:40:44 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:40:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[172547]: 24/11/2025 09:40:44 : epoch 692427d0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24ec003cc0 fd 37 proxy ignored for local
Nov 24 09:40:44 compute-0 systemd[1]: Started Process Core Dump (PID 184339/UID 0).
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:40:45
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'volumes', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.nfs', 'images']
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:40:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:45 compute-0 ceph-mon[74331]: pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:45.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:45 compute-0 systemd-coredump[184352]: Process 172551 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007f25a6ee132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:40:45 compute-0 systemd[1]: systemd-coredump@3-184339-0.service: Deactivated successfully.
Nov 24 09:40:45 compute-0 systemd[1]: systemd-coredump@3-184339-0.service: Consumed 1.228s CPU time.
Nov 24 09:40:46 compute-0 podman[185343]: 2025-11-24 09:40:46.042357618 +0000 UTC m=+0.025149040 container died 6cbf7d6691b854e505035fd6b4a8cbdc0dc30dce35769b238629c50fa7ee7317 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Nov 24 09:40:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-276545df570905e427b2dad17fecb69f3cf1685f8c2b1cce78fe30ceaa4089f6-merged.mount: Deactivated successfully.
Nov 24 09:40:46 compute-0 podman[185343]: 2025-11-24 09:40:46.178761304 +0000 UTC m=+0.161552706 container remove 6cbf7d6691b854e505035fd6b4a8cbdc0dc30dce35769b238629c50fa7ee7317 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:40:46 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:40:46 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Failed with result 'exit-code'.
Nov 24 09:40:46 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.545s CPU time.
Nov 24 09:40:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:46.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:40:46 compute-0 ceph-mon[74331]: pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:40:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:40:47.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:40:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:40:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:47.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:48.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:48 compute-0 ceph-mon[74331]: pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:40:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:40:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:40:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:49.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:40:50 compute-0 sshd-session[187888]: Invalid user minecraft from 209.38.206.249 port 52482
Nov 24 09:40:50 compute-0 sshd-session[187888]: Connection closed by invalid user minecraft 209.38.206.249 port 52482 [preauth]
Nov 24 09:40:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:50.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094050 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:40:50 compute-0 ceph-mon[74331]: pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:40:50 compute-0 sshd-session[188232]: Invalid user appserver from 209.38.206.249 port 57198
Nov 24 09:40:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:40:50] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:40:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:40:50] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:40:51 compute-0 sshd-session[188232]: Connection closed by invalid user appserver 209.38.206.249 port 57198 [preauth]
Nov 24 09:40:51 compute-0 podman[188459]: 2025-11-24 09:40:51.081396766 +0000 UTC m=+0.090895943 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:40:51 compute-0 sshd-session[188606]: Invalid user hduser from 209.38.206.249 port 57200
Nov 24 09:40:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:40:51 compute-0 sshd-session[188606]: Connection closed by invalid user hduser 209.38.206.249 port 57200 [preauth]
Nov 24 09:40:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:40:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:51.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:40:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:52.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:52 compute-0 ceph-mon[74331]: pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:40:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:40:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:53.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:40:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:54.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:40:54 compute-0 ceph-mon[74331]: pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:40:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:40:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:55.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:56 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Scheduled restart job, restart counter is at 4.
Nov 24 09:40:56 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:40:56 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.545s CPU time.
Nov 24 09:40:56 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:40:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:56.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:40:56 compute-0 podman[190746]: 2025-11-24 09:40:56.686199697 +0000 UTC m=+0.046336695 container create 9ba91ae3e1e1e5f9865e5f38c289f288c02b1947600f66476b33f0b8ef6b5dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9970c1bf71d5f4b0ffe61f67e7998c3dc31577e113673819809f7ef01474cea/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9970c1bf71d5f4b0ffe61f67e7998c3dc31577e113673819809f7ef01474cea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9970c1bf71d5f4b0ffe61f67e7998c3dc31577e113673819809f7ef01474cea/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9970c1bf71d5f4b0ffe61f67e7998c3dc31577e113673819809f7ef01474cea/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ssprex-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:40:56 compute-0 podman[190746]: 2025-11-24 09:40:56.665424873 +0000 UTC m=+0.025561891 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:40:56 compute-0 podman[190746]: 2025-11-24 09:40:56.769021533 +0000 UTC m=+0.129158541 container init 9ba91ae3e1e1e5f9865e5f38c289f288c02b1947600f66476b33f0b8ef6b5dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 24 09:40:56 compute-0 podman[190746]: 2025-11-24 09:40:56.77436356 +0000 UTC m=+0.134500558 container start 9ba91ae3e1e1e5f9865e5f38c289f288c02b1947600f66476b33f0b8ef6b5dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:40:56 compute-0 bash[190746]: 9ba91ae3e1e1e5f9865e5f38c289f288c02b1947600f66476b33f0b8ef6b5dee
Nov 24 09:40:56 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:40:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:40:56 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:40:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:40:56 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:40:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:40:56 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:40:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:40:56 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:40:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:40:56 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:40:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:40:56 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:40:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:40:56 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:40:57 compute-0 ceph-mon[74331]: pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:40:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:40:57.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:40:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:40:57 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:40:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:40:57 compute-0 sshd-session[190781]: Invalid user ubnt from 209.38.206.249 port 57208
Nov 24 09:40:57 compute-0 sshd-session[190781]: Connection closed by invalid user ubnt 209.38.206.249 port 57208 [preauth]
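The pair of sshd-session lines above is one probe in a slow brute-force scan: throughout this window 209.38.206.249 cycles common usernames (ubnt, deployer, moxa, devops, ubuntu, openhabian, guest, fa) every few seconds, and every connection closes at [preauth]. A minimal sketch, assuming a plain-text journal dump on stdin, that tallies the usernames tried per source address:

    #!/usr/bin/env python3
    # Tally sshd "Invalid user" probes per source address. The regex is an
    # assumption read off the sshd-session lines in this log.
    import re
    import sys
    from collections import defaultdict

    INVALID = re.compile(r'sshd(?:-session)?\[\d+\]: Invalid user (\S+) from (\S+) port \d+')

    attempts = defaultdict(set)
    for line in sys.stdin:
        m = INVALID.search(line)
        if m:
            user, addr = m.groups()
            attempts[addr].add(user)

    for addr, users in sorted(attempts.items(), key=lambda kv: -len(kv[1])):
        print(f"{addr}: {len(users)} usernames tried: {', '.join(sorted(users))}")

Fed this section it would report a single source, 209.38.206.249, with eight distinct usernames and no successful authentication.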
Nov 24 09:40:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:40:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:57.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
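The three radosgw lines above (request start, request done, beast access line) form one load-balancer health probe; 192.168.122.100 and 192.168.122.102 each issue an anonymous HEAD / roughly every two seconds and always receive a 200. A minimal sketch that summarizes these probes, with the beast field layout taken as an assumption from the lines in this log:

    #!/usr/bin/env python3
    # Count radosgw "beast" access lines and average the reported latency
    # per (client, request, status) triple.
    import re
    import sys
    from collections import Counter, defaultdict

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - \S+ '
        r'\[[^\]]+\] "(?P<req>[^"]+)" (?P<status>\d{3}) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    counts = Counter()
    total_latency = defaultdict(float)
    for line in sys.stdin:
        m = BEAST.search(line)
        if not m:
            continue
        key = (m['client'], m['req'], m['status'])
        counts[key] += 1
        total_latency[key] += float(m['latency'])

    for key, n in counts.most_common():
        client, req, status = key
        print(f'{n:5d}x {client:<16} "{req}" {status} '
              f'avg_latency={total_latency[key] / n:.9f}s')

On this excerpt every probe is a 200 with a latency of 0 or ~1 ms, i.e. pure health-check noise.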
Nov 24 09:40:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:40:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:40:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:40:58.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:40:59 compute-0 ceph-mon[74331]: pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:40:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
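The mon and mgr republish the pgmap digest every couple of seconds; across this whole window the cluster holds steady at 353 PGs, all active+clean, with only a trickle of client I/O. A small sketch that collapses the repeats and prints a line only when the PG state summary actually changes:

    #!/usr/bin/env python3
    # Collapse repeating pgmap digests into a timeline of state changes.
    import re
    import sys

    PGMAP = re.compile(r'pgmap v(\d+): (\d+) pgs: ([^;]+);')

    last_state = None
    for line in sys.stdin:
        m = PGMAP.search(line)
        if not m:
            continue
        version, npgs, state = m.group(1), m.group(2), m.group(3).strip()
        if state != last_state:
            print(f'{line[:15]}  v{version}: {npgs} pgs -> {state}')
            last_state = state

Here it would emit one line for v411 and then stay silent, which is what a healthy idle cluster should look like.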
Nov 24 09:40:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:40:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:40:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:40:59.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:00 compute-0 sshd-session[190816]: Invalid user deployer from 209.38.206.249 port 52650
Nov 24 09:41:00 compute-0 sshd-session[190816]: Connection closed by invalid user deployer 209.38.206.249 port 52650 [preauth]
Nov 24 09:41:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:00.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:41:00] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:41:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:41:00] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:41:01 compute-0 ceph-mon[74331]: pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:41:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:41:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Nov 24 09:41:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:01.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:01 compute-0 sudo[190820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:41:01 compute-0 sudo[190820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:01 compute-0 sudo[190820]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:02.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:03 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:41:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:03 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:41:03 compute-0 ceph-mon[74331]: pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Nov 24 09:41:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Nov 24 09:41:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:03.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:04 compute-0 sshd-session[190847]: Invalid user moxa from 209.38.206.249 port 52652
Nov 24 09:41:04 compute-0 sshd-session[190847]: Connection closed by invalid user moxa 209.38.206.249 port 52652 [preauth]
Nov 24 09:41:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:04.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:05 compute-0 ceph-mon[74331]: pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Nov 24 09:41:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Nov 24 09:41:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:05.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:06.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:41:07.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
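This Alertmanager dispatch error recurs on a ~10 s cycle: the ceph-dashboard webhook receivers on compute-1 and compute-2 never answer within the notification deadline. A minimal reachability probe for the two URLs taken from the log line above (stdlib only; the empty alerts payload and the 5-second timeout are assumptions, not what Alertmanager itself sends):

    #!/usr/bin/env python3
    # Probe the webhook receivers that Alertmanager keeps timing out on.
    import json
    import urllib.request

    RECEIVERS = [
        'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver',
        'http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver',
    ]

    for url in RECEIVERS:
        req = urllib.request.Request(
            url,
            data=json.dumps({'alerts': []}).encode(),
            headers={'Content-Type': 'application/json'},
            method='POST',
        )
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(f'{url}: HTTP {resp.status}')
        except Exception as exc:  # refused, timed out, or TLS expected on 8443
            print(f'{url}: {exc}')

A hang or timeout here points at the receiving dashboard module or at filtering between the hosts rather than at Alertmanager itself; note that port 8443 may well expect TLS even though the configured receiver URL is plain http.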
Nov 24 09:41:07 compute-0 ceph-mon[74331]: pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Nov 24 09:41:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1022 B/s wr, 3 op/s
Nov 24 09:41:07 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Nov 24 09:41:07 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 09:41:07 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 24 09:41:07 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 09:41:07 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 24 09:41:07 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 09:41:07 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 09:41:07 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 09:41:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:07.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:08 compute-0 ceph-mon[74331]: pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1022 B/s wr, 3 op/s
Nov 24 09:41:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:08.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:08 compute-0 groupadd[190868]: group added to /etc/group: name=dnsmasq, GID=991
Nov 24 09:41:08 compute-0 groupadd[190868]: group added to /etc/gshadow: name=dnsmasq
Nov 24 09:41:08 compute-0 groupadd[190868]: new group: name=dnsmasq, GID=991
Nov 24 09:41:08 compute-0 useradd[190875]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 24 09:41:08 compute-0 dbus-broker-launch[790]: Noticed file-system modification, trigger reload.
Nov 24 09:41:08 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 24 09:41:08 compute-0 dbus-broker-launch[790]: Noticed file-system modification, trigger reload.
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
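Ganesha parses the configuration file successfully but flags the RADOS_URLS block at /etc/ganesha/ganesha.conf line 24 and the RGW block at line 29 as unknown, and the warning above notes that no EXPORT entries exist yet; in a cephadm-managed deployment the exports are typically delivered later through the RADOS configuration store, so on their own these startup warnings do not necessarily indicate a broken service. A sketch that gathers every WARN/CRIT this daemon logs, grouped by component, so the DBUS and kerberos failures below stand out from the noise (the component/level layout is read off the ganesha.nfsd lines here):

    #!/usr/bin/env python3
    # Group NFS-Ganesha WARN/CRIT messages by component.
    import re
    import sys
    from collections import defaultdict

    GANESHA = re.compile(
        r'ganesha\.nfsd-\d+\[\w+\] \S+ '
        r':(?P<comp>[A-Z0-9 _]+?) :(?P<lvl>WARN|CRIT) :(?P<msg>.*)'
    )

    by_comp = defaultdict(list)
    for line in sys.stdin:
        m = GANESHA.search(line)
        if m:
            by_comp[m['comp'].strip()].append(f"{m['lvl']}: {m['msg'].strip()}")

    for comp, msgs in sorted(by_comp.items()):
        print(f'{comp}:')
        for msg in msgs:
            print(f'  - {msg}')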
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:41:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1022 B/s wr, 3 op/s
Nov 24 09:41:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:09 compute-0 groupadd[190906]: group added to /etc/group: name=clevis, GID=990
Nov 24 09:41:09 compute-0 groupadd[190906]: group added to /etc/gshadow: name=clevis
Nov 24 09:41:09 compute-0 groupadd[190906]: new group: name=clevis, GID=990
Nov 24 09:41:09 compute-0 sshd-session[190901]: Invalid user devops from 209.38.206.249 port 52660
Nov 24 09:41:09 compute-0 useradd[190913]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 24 09:41:09 compute-0 sshd-session[190901]: Connection closed by invalid user devops 209.38.206.249 port 52660 [preauth]
Nov 24 09:41:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:09.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:09 compute-0 usermod[190923]: add 'clevis' to group 'tss'
Nov 24 09:41:09 compute-0 usermod[190923]: add 'clevis' to shadow group 'tss'
Nov 24 09:41:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:10.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:10 compute-0 ceph-mon[74331]: pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1022 B/s wr, 3 op/s
Nov 24 09:41:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:10 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa94000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:41:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:41:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:41:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:41:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:11 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:11 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac0001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:11.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:12 compute-0 polkitd[43367]: Reloading rules
Nov 24 09:41:12 compute-0 polkitd[43367]: Collecting garbage unconditionally...
Nov 24 09:41:12 compute-0 polkitd[43367]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 09:41:12 compute-0 polkitd[43367]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 09:41:12 compute-0 polkitd[43367]: Finished loading, compiling and executing 3 rules
Nov 24 09:41:12 compute-0 polkitd[43367]: Reloading rules
Nov 24 09:41:12 compute-0 polkitd[43367]: Collecting garbage unconditionally...
Nov 24 09:41:12 compute-0 polkitd[43367]: Loading rules from directory /etc/polkit-1/rules.d
Nov 24 09:41:12 compute-0 polkitd[43367]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 24 09:41:12 compute-0 polkitd[43367]: Finished loading, compiling and executing 3 rules
Nov 24 09:41:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:12.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094112 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:41:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:12 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa940016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:12 compute-0 ceph-mon[74331]: pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:13 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:13 compute-0 groupadd[191114]: group added to /etc/group: name=ceph, GID=167
Nov 24 09:41:13 compute-0 groupadd[191114]: group added to /etc/gshadow: name=ceph
Nov 24 09:41:13 compute-0 groupadd[191114]: new group: name=ceph, GID=167
Nov 24 09:41:13 compute-0 useradd[191120]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 24 09:41:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:41:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:13 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa900016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:13.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:14.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:14 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac0001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:14 compute-0 ceph-mon[74331]: pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:41:14 compute-0 podman[191128]: 2025-11-24 09:41:14.833986958 +0000 UTC m=+0.108100207 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
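This very wide podman line is a healthcheck event: the ovn_controller container ran its configured test (/openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/ovn_controller) and reported health_status=healthy with a failing streak of 0. A sketch that reduces such events to container name and verdict (the key=value layout inside the parentheses is an assumption read off the entries in this log):

    #!/usr/bin/env python3
    # Reduce podman "container health_status" events to name + verdict.
    import re
    import sys

    HEALTH = re.compile(
        r'container health_status \S+ \(image=[^,]+, name=(?P<name>[^,]+),'
        r'.*?health_status=(?P<status>[^,)]+)'
    )

    for line in sys.stdin:
        m = HEALTH.search(line)
        if m:
            print(f"{line[:15]}  {m['name']}: {m['status']}")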
Nov 24 09:41:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:15 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa940016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:41:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:41:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:41:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:41:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:41:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:41:15 compute-0 sshd-session[191155]: Invalid user ubuntu from 209.38.206.249 port 41844
Nov 24 09:41:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:41:15 compute-0 sshd-session[191155]: Connection closed by invalid user ubuntu 209.38.206.249 port 41844 [preauth]
Nov 24 09:41:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:15 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:41:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:15.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:16.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:16 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa900016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:16 compute-0 ceph-mon[74331]: pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:41:16 compute-0 sshd[1005]: Received signal 15; terminating.
Nov 24 09:41:16 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 24 09:41:16 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 24 09:41:16 compute-0 systemd[1]: sshd.service: Unit process 191843 (sshd-session) remains running after unit stopped.
Nov 24 09:41:16 compute-0 systemd[1]: sshd.service: Unit process 191844 (sshd-session) remains running after unit stopped.
Nov 24 09:41:16 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 24 09:41:16 compute-0 systemd[1]: sshd.service: Consumed 4.468s CPU time, 33.6M memory peak, read 32.0K from disk, written 148.0K to disk.
Nov 24 09:41:16 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 24 09:41:16 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 24 09:41:16 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 09:41:16 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 09:41:16 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 09:41:16 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 24 09:41:16 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 24 09:41:16 compute-0 sshd[191849]: Server listening on 0.0.0.0 port 22.
Nov 24 09:41:16 compute-0 sshd[191849]: Server listening on :: port 22.
Nov 24 09:41:16 compute-0 systemd[1]: Started OpenSSH server daemon.
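The sequence above is a clean sshd restart: systemd stops the unit (the in-flight sshd-session processes 191843 and 191844 deliberately outlive it), skips the sshd-keygen units because their condition is unmet, and has a new daemon listening on port 22 within the same second. A sketch that measures such a restart window from the syslog timestamps (the prefix carries no year, and single-digit days use a padded space that this simple parse does not handle):

    #!/usr/bin/env python3
    # Gap between "Stopping" and "Started" for the OpenSSH unit.
    import sys
    from datetime import datetime

    def stamp(line):
        # Syslog prefix like "Nov 24 09:41:16".
        return datetime.strptime(line[:15], '%b %d %H:%M:%S')

    stopping = started = None
    for line in sys.stdin:
        if 'Stopping OpenSSH server daemon' in line and stopping is None:
            stopping = stamp(line)
        elif 'Started OpenSSH server daemon' in line and stopping and started is None:
            started = stamp(line)

    if stopping and started:
        print(f'sshd restart window: {(started - stopping).total_seconds():.0f}s')

On this excerpt it prints 0s: both events carry the 09:41:16 stamp, and the next probe from 209.38.206.249 at 09:41:17 is already answered by the new daemon.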
Nov 24 09:41:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:41:17.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:41:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:17 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:17 compute-0 sshd-session[191843]: Invalid user openhabian from 209.38.206.249 port 41848
Nov 24 09:41:17 compute-0 sshd-session[191843]: Connection closed by invalid user openhabian 209.38.206.249 port 41848 [preauth]
Nov 24 09:41:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:41:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:17 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa940016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:17.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:18 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:41:18 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:41:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:18.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:18 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:18 compute-0 systemd[1]: Reloading.
Nov 24 09:41:18 compute-0 systemd-rc-local-generator[192109]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:18 compute-0 systemd-sysv-generator[192112]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:18 compute-0 ceph-mon[74331]: pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:41:19 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:41:19 compute-0 auditd[701]: Audit daemon rotating log files
Nov 24 09:41:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:19 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa900016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:41:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:19 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa940016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:19.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:41:20.550 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:41:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:41:20.550 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:41:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:41:20.550 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
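The three DEBUG lines above are one pass of neutron's ProcessMonitor taking its _check_child_processes lock: oslo_concurrency.lockutils logs the acquire attempt, the acquisition with the time waited, and the release with the time held. A minimal sketch of that same timing pattern using a plain threading.Lock (the oslo decorator machinery itself is not reproduced here):

    #!/usr/bin/env python3
    # The acquire/waited/held logging pattern of oslo_concurrency.lockutils,
    # reduced to a plain threading.Lock.
    import threading
    import time

    _lock = threading.Lock()

    def with_timed_lock(name, fn):
        print(f'Acquiring lock "{name}"')
        t0 = time.monotonic()
        with _lock:
            print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
            t1 = time.monotonic()
            try:
                return fn()
            finally:
                print(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')

    with_timed_lock('_check_child_processes', lambda: time.sleep(0.01))

The 0.000 s wait/held values in the log show the monitor loop is uncontended and its child-process check is effectively instantaneous.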
Nov 24 09:41:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:20.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:20 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:41:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:41:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:41:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:41:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:21 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:21 compute-0 sudo[171647]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:21 compute-0 ceph-mon[74331]: pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:41:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:41:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:21 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:21 compute-0 podman[195704]: 2025-11-24 09:41:21.771878799 +0000 UTC m=+0.049541167 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:41:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:21.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:22 compute-0 sudo[196122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:41:22 compute-0 sudo[196122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:22 compute-0 sudo[196122]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:22.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:22 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa94002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:22 compute-0 sshd-session[196636]: Invalid user guest from 209.38.206.249 port 59080
Nov 24 09:41:22 compute-0 sshd-session[196636]: Connection closed by invalid user guest 209.38.206.249 port 59080 [preauth]
Nov 24 09:41:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:23 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:23 compute-0 ceph-mon[74331]: pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:41:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:23 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:23.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:24.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:24 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:25 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:25 compute-0 ceph-mon[74331]: pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:25 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:25.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:26 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:41:26 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:41:26 compute-0 systemd[1]: man-db-cache-update.service: Consumed 9.704s CPU time.
Nov 24 09:41:26 compute-0 systemd[1]: run-re62d2b15b2114b7083ca1c3fe7403ad9.service: Deactivated successfully.
Nov 24 09:41:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:26.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:26 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:41:27.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:41:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:27 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa94003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:27 compute-0 ceph-mon[74331]: pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:41:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:27 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:27.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:28 compute-0 sshd-session[200574]: Invalid user fa from 209.38.206.249 port 59092
Nov 24 09:41:28 compute-0 sshd-session[200574]: Connection closed by invalid user fa 209.38.206.249 port 59092 [preauth]
Nov 24 09:41:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:28.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:28 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:29 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:29 compute-0 ceph-mon[74331]: pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:41:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:29 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:29.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:30 compute-0 ceph-mon[74331]: pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:30.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:30 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:41:30] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:41:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:41:30] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:41:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:31 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa94003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:31 compute-0 sudo[200705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoxsotojhlgvkanqublhkbptzqblozlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977290.6942332-968-221004892474614/AnsiballZ_systemd.py'
Nov 24 09:41:31 compute-0 sudo[200705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:41:31 compute-0 python3.9[200707]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:41:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:41:31 compute-0 systemd[1]: Reloading.
Nov 24 09:41:31 compute-0 systemd-sysv-generator[200739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:31 compute-0 systemd-rc-local-generator[200734]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:31 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:31.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:31 compute-0 sudo[200705]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:32 compute-0 sudo[200896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbiitvgfbbgdnrpbinijaxgbnyunyvnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977292.059006-968-265158394781398/AnsiballZ_systemd.py'
Nov 24 09:41:32 compute-0 sudo[200896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:32 compute-0 ceph-mon[74331]: pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:41:32 compute-0 python3.9[200898]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:41:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:32.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:32 compute-0 systemd[1]: Reloading.
Nov 24 09:41:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:32 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa90003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:32 compute-0 systemd-sysv-generator[200932]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:32 compute-0 systemd-rc-local-generator[200928]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:33 compute-0 sudo[200896]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:33 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:33 compute-0 sudo[201087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-notftmwpfaxutcbjhwrfthdmoyryjrml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977293.1471133-968-153723871545377/AnsiballZ_systemd.py'
Nov 24 09:41:33 compute-0 sudo[201087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:33 compute-0 python3.9[201089]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:41:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:33 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:33 compute-0 systemd[1]: Reloading.
Nov 24 09:41:33 compute-0 systemd-rc-local-generator[201118]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:33 compute-0 systemd-sysv-generator[201123]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:33.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:34 compute-0 sudo[201087]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:34 compute-0 sudo[201278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfjzdxopiyjxcfzahdflcgzsxovqornn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977294.1965094-968-145245641689245/AnsiballZ_systemd.py'
Nov 24 09:41:34 compute-0 sudo[201278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:34.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:34 compute-0 ceph-mon[74331]: pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:34 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:34 compute-0 python3.9[201280]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:41:34 compute-0 systemd[1]: Reloading.
Nov 24 09:41:34 compute-0 systemd-rc-local-generator[201336]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:34 compute-0 systemd-sysv-generator[201340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:35 compute-0 sudo[201282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:41:35 compute-0 sudo[201282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:35 compute-0 sudo[201282]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:35 compute-0 sudo[201278]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:35 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:35 compute-0 sudo[201348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:41:35 compute-0 sudo[201348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:35 compute-0 sshd-session[201344]: Invalid user elastic from 209.38.206.249 port 60030
Nov 24 09:41:35 compute-0 sshd-session[201344]: Connection closed by invalid user elastic 209.38.206.249 port 60030 [preauth]
Nov 24 09:41:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094135 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:41:35 compute-0 sudo[201348]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:35 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab4001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:41:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:41:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:41:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:41:35 compute-0 sudo[201429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:41:35 compute-0 sudo[201429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:35 compute-0 sudo[201429]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:35.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:35 compute-0 sudo[201454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:41:35 compute-0 sudo[201454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:36 compute-0 podman[201521]: 2025-11-24 09:41:36.338183669 +0000 UTC m=+0.041774987 container create 115dd12fc377737a105fbdd4b9e808deb6d58b8b3b7a2b9cc7d007b1a31c26e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:41:36 compute-0 systemd[1]: Started libpod-conmon-115dd12fc377737a105fbdd4b9e808deb6d58b8b3b7a2b9cc7d007b1a31c26e6.scope.
Nov 24 09:41:36 compute-0 podman[201521]: 2025-11-24 09:41:36.319419726 +0000 UTC m=+0.023011064 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:41:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:41:36 compute-0 podman[201521]: 2025-11-24 09:41:36.438551788 +0000 UTC m=+0.142143126 container init 115dd12fc377737a105fbdd4b9e808deb6d58b8b3b7a2b9cc7d007b1a31c26e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 09:41:36 compute-0 podman[201521]: 2025-11-24 09:41:36.448139508 +0000 UTC m=+0.151730826 container start 115dd12fc377737a105fbdd4b9e808deb6d58b8b3b7a2b9cc7d007b1a31c26e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:41:36 compute-0 podman[201521]: 2025-11-24 09:41:36.451573564 +0000 UTC m=+0.155164912 container attach 115dd12fc377737a105fbdd4b9e808deb6d58b8b3b7a2b9cc7d007b1a31c26e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_heisenberg, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:41:36 compute-0 confident_heisenberg[201537]: 167 167
Nov 24 09:41:36 compute-0 systemd[1]: libpod-115dd12fc377737a105fbdd4b9e808deb6d58b8b3b7a2b9cc7d007b1a31c26e6.scope: Deactivated successfully.
Nov 24 09:41:36 compute-0 podman[201521]: 2025-11-24 09:41:36.455596705 +0000 UTC m=+0.159188023 container died 115dd12fc377737a105fbdd4b9e808deb6d58b8b3b7a2b9cc7d007b1a31c26e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_heisenberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 09:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f29d88efb4f19d174b03b65c310b1dd030df7298b3072c7c2030ae98a5b8e6b-merged.mount: Deactivated successfully.
Nov 24 09:41:36 compute-0 podman[201521]: 2025-11-24 09:41:36.496267887 +0000 UTC m=+0.199859205 container remove 115dd12fc377737a105fbdd4b9e808deb6d58b8b3b7a2b9cc7d007b1a31c26e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_heisenberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 24 09:41:36 compute-0 systemd[1]: libpod-conmon-115dd12fc377737a105fbdd4b9e808deb6d58b8b3b7a2b9cc7d007b1a31c26e6.scope: Deactivated successfully.
Nov 24 09:41:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:36.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:36 compute-0 podman[201561]: 2025-11-24 09:41:36.659703333 +0000 UTC m=+0.041551495 container create 767f1657a47e1e2a5586db4789fb8ad114b62da92749173a3d3cd09dacde737a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chaum, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 09:41:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:36 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa94004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:36 compute-0 ceph-mon[74331]: pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:36 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:41:36 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:41:36 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:41:36 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:41:36 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:41:36 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:41:36 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:41:36 compute-0 systemd[1]: Started libpod-conmon-767f1657a47e1e2a5586db4789fb8ad114b62da92749173a3d3cd09dacde737a.scope.
Nov 24 09:41:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ff03d657d3927a51c6454e58a33948a8f0b01623e36c648e8d90d3ef93b07f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ff03d657d3927a51c6454e58a33948a8f0b01623e36c648e8d90d3ef93b07f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ff03d657d3927a51c6454e58a33948a8f0b01623e36c648e8d90d3ef93b07f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ff03d657d3927a51c6454e58a33948a8f0b01623e36c648e8d90d3ef93b07f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ff03d657d3927a51c6454e58a33948a8f0b01623e36c648e8d90d3ef93b07f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:36 compute-0 podman[201561]: 2025-11-24 09:41:36.643035515 +0000 UTC m=+0.024883697 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:41:36 compute-0 podman[201561]: 2025-11-24 09:41:36.755557701 +0000 UTC m=+0.137405883 container init 767f1657a47e1e2a5586db4789fb8ad114b62da92749173a3d3cd09dacde737a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 09:41:36 compute-0 podman[201561]: 2025-11-24 09:41:36.76423471 +0000 UTC m=+0.146082872 container start 767f1657a47e1e2a5586db4789fb8ad114b62da92749173a3d3cd09dacde737a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chaum, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 09:41:36 compute-0 podman[201561]: 2025-11-24 09:41:36.771362089 +0000 UTC m=+0.153210271 container attach 767f1657a47e1e2a5586db4789fb8ad114b62da92749173a3d3cd09dacde737a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:41:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:41:37.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:41:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:41:37.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:41:37 compute-0 great_chaum[201577]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:41:37 compute-0 great_chaum[201577]: --> All data devices are unavailable
Nov 24 09:41:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:37 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:37 compute-0 systemd[1]: libpod-767f1657a47e1e2a5586db4789fb8ad114b62da92749173a3d3cd09dacde737a.scope: Deactivated successfully.
Nov 24 09:41:37 compute-0 podman[201561]: 2025-11-24 09:41:37.180975319 +0000 UTC m=+0.562823491 container died 767f1657a47e1e2a5586db4789fb8ad114b62da92749173a3d3cd09dacde737a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:41:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1ff03d657d3927a51c6454e58a33948a8f0b01623e36c648e8d90d3ef93b07f-merged.mount: Deactivated successfully.
Nov 24 09:41:37 compute-0 podman[201561]: 2025-11-24 09:41:37.223724134 +0000 UTC m=+0.605572296 container remove 767f1657a47e1e2a5586db4789fb8ad114b62da92749173a3d3cd09dacde737a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_chaum, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 24 09:41:37 compute-0 systemd[1]: libpod-conmon-767f1657a47e1e2a5586db4789fb8ad114b62da92749173a3d3cd09dacde737a.scope: Deactivated successfully.
Nov 24 09:41:37 compute-0 sudo[201454]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:37 compute-0 sudo[201607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:41:37 compute-0 sudo[201607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:37 compute-0 sudo[201607]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:37 compute-0 sudo[201632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:41:37 compute-0 sudo[201632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:37 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:37 compute-0 podman[201700]: 2025-11-24 09:41:37.825340798 +0000 UTC m=+0.046602562 container create 1c9d47f955a034c8b6070a4bd3a5e35b1829aacc15194b9b878b598849e7248b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wiles, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 09:41:37 compute-0 systemd[1]: Started libpod-conmon-1c9d47f955a034c8b6070a4bd3a5e35b1829aacc15194b9b878b598849e7248b.scope.
Nov 24 09:41:37 compute-0 podman[201700]: 2025-11-24 09:41:37.803766516 +0000 UTC m=+0.025028260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:41:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:41:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:37.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:37 compute-0 podman[201700]: 2025-11-24 09:41:37.917323339 +0000 UTC m=+0.138585423 container init 1c9d47f955a034c8b6070a4bd3a5e35b1829aacc15194b9b878b598849e7248b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wiles, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:41:37 compute-0 podman[201700]: 2025-11-24 09:41:37.9249117 +0000 UTC m=+0.146173444 container start 1c9d47f955a034c8b6070a4bd3a5e35b1829aacc15194b9b878b598849e7248b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 09:41:37 compute-0 podman[201700]: 2025-11-24 09:41:37.928280864 +0000 UTC m=+0.149542588 container attach 1c9d47f955a034c8b6070a4bd3a5e35b1829aacc15194b9b878b598849e7248b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 09:41:37 compute-0 condescending_wiles[201717]: 167 167
Nov 24 09:41:37 compute-0 systemd[1]: libpod-1c9d47f955a034c8b6070a4bd3a5e35b1829aacc15194b9b878b598849e7248b.scope: Deactivated successfully.
Nov 24 09:41:37 compute-0 podman[201700]: 2025-11-24 09:41:37.931737811 +0000 UTC m=+0.152999535 container died 1c9d47f955a034c8b6070a4bd3a5e35b1829aacc15194b9b878b598849e7248b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 09:41:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-672f4614026000f1ba4b2c23fc84a06ee27aab56a475fdad2c56a97afc174a4b-merged.mount: Deactivated successfully.
Nov 24 09:41:37 compute-0 podman[201700]: 2025-11-24 09:41:37.982730472 +0000 UTC m=+0.203992196 container remove 1c9d47f955a034c8b6070a4bd3a5e35b1829aacc15194b9b878b598849e7248b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:41:37 compute-0 systemd[1]: libpod-conmon-1c9d47f955a034c8b6070a4bd3a5e35b1829aacc15194b9b878b598849e7248b.scope: Deactivated successfully.
Nov 24 09:41:38 compute-0 podman[201814]: 2025-11-24 09:41:38.16732423 +0000 UTC m=+0.055971087 container create 3843f6f5d57b214ef6453f5f5e3d2b83aab20f2529e17303c2325800c983b264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 24 09:41:38 compute-0 systemd[1]: Started libpod-conmon-3843f6f5d57b214ef6453f5f5e3d2b83aab20f2529e17303c2325800c983b264.scope.
Nov 24 09:41:38 compute-0 podman[201814]: 2025-11-24 09:41:38.145273696 +0000 UTC m=+0.033920573 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:41:38 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69b032a2dcb0918774f08ac1b57786c543343fb459906abc1e228cfa81189fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69b032a2dcb0918774f08ac1b57786c543343fb459906abc1e228cfa81189fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69b032a2dcb0918774f08ac1b57786c543343fb459906abc1e228cfa81189fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69b032a2dcb0918774f08ac1b57786c543343fb459906abc1e228cfa81189fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:38 compute-0 podman[201814]: 2025-11-24 09:41:38.263999119 +0000 UTC m=+0.152645986 container init 3843f6f5d57b214ef6453f5f5e3d2b83aab20f2529e17303c2325800c983b264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:41:38 compute-0 podman[201814]: 2025-11-24 09:41:38.27160465 +0000 UTC m=+0.160251507 container start 3843f6f5d57b214ef6453f5f5e3d2b83aab20f2529e17303c2325800c983b264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:41:38 compute-0 podman[201814]: 2025-11-24 09:41:38.275951649 +0000 UTC m=+0.164598526 container attach 3843f6f5d57b214ef6453f5f5e3d2b83aab20f2529e17303c2325800c983b264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 09:41:38 compute-0 sudo[201886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggbkzohnykuqyqplsgqrwultttcfqnhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977297.9295967-1055-177962475460871/AnsiballZ_systemd.py'
Nov 24 09:41:38 compute-0 sudo[201886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094138 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:41:38 compute-0 vibrant_edison[201854]: {
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:     "0": [
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:         {
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             "devices": [
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "/dev/loop3"
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             ],
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             "lv_name": "ceph_lv0",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             "lv_size": "21470642176",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             "name": "ceph_lv0",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             "tags": {
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.cluster_name": "ceph",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.crush_device_class": "",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.encrypted": "0",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.osd_id": "0",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.type": "block",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.vdo": "0",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:                 "ceph.with_tpm": "0"
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             },
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             "type": "block",
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:             "vg_name": "ceph_vg0"
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:         }
Nov 24 09:41:38 compute-0 vibrant_edison[201854]:     ]
Nov 24 09:41:38 compute-0 vibrant_edison[201854]: }
Nov 24 09:41:38 compute-0 systemd[1]: libpod-3843f6f5d57b214ef6453f5f5e3d2b83aab20f2529e17303c2325800c983b264.scope: Deactivated successfully.
Nov 24 09:41:38 compute-0 podman[201814]: 2025-11-24 09:41:38.570287634 +0000 UTC m=+0.458934501 container died 3843f6f5d57b214ef6453f5f5e3d2b83aab20f2529e17303c2325800c983b264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:41:38 compute-0 python3.9[201888]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a69b032a2dcb0918774f08ac1b57786c543343fb459906abc1e228cfa81189fb-merged.mount: Deactivated successfully.
Nov 24 09:41:38 compute-0 podman[201814]: 2025-11-24 09:41:38.634324132 +0000 UTC m=+0.522970979 container remove 3843f6f5d57b214ef6453f5f5e3d2b83aab20f2529e17303c2325800c983b264 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_edison, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:41:38 compute-0 systemd[1]: libpod-conmon-3843f6f5d57b214ef6453f5f5e3d2b83aab20f2529e17303c2325800c983b264.scope: Deactivated successfully.
Nov 24 09:41:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:38.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:38 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa94004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:38 compute-0 systemd[1]: Reloading.
Nov 24 09:41:38 compute-0 sudo[201632]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:38 compute-0 ceph-mon[74331]: pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:41:38 compute-0 systemd-sysv-generator[201962]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:38 compute-0 systemd-rc-local-generator[201958]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:39 compute-0 sudo[201909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:41:39 compute-0 sudo[201909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:39 compute-0 sudo[201909]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:39 compute-0 sudo[201886]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:39 compute-0 sudo[201968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:41:39 compute-0 sudo[201968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:39 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab4001bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:39 compute-0 sudo[202184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srjxjovyjvktfgdnstloniahhgvgokhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977299.1918392-1055-101546963926252/AnsiballZ_systemd.py'
Nov 24 09:41:39 compute-0 sudo[202184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:39 compute-0 podman[202186]: 2025-11-24 09:41:39.508218527 +0000 UTC m=+0.038609790 container create f9b495254b7c56aca7719a3682857ee3101af6a255f321cd7451f7c5ea7504f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_black, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:41:39 compute-0 systemd[1]: Started libpod-conmon-f9b495254b7c56aca7719a3682857ee3101af6a255f321cd7451f7c5ea7504f3.scope.
Nov 24 09:41:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:41:39 compute-0 podman[202186]: 2025-11-24 09:41:39.492289118 +0000 UTC m=+0.022680401 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:41:39 compute-0 podman[202186]: 2025-11-24 09:41:39.589662894 +0000 UTC m=+0.120054187 container init f9b495254b7c56aca7719a3682857ee3101af6a255f321cd7451f7c5ea7504f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_black, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:41:39 compute-0 podman[202186]: 2025-11-24 09:41:39.595549412 +0000 UTC m=+0.125940675 container start f9b495254b7c56aca7719a3682857ee3101af6a255f321cd7451f7c5ea7504f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_black, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:41:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:41:39 compute-0 podman[202186]: 2025-11-24 09:41:39.599517102 +0000 UTC m=+0.129908385 container attach f9b495254b7c56aca7719a3682857ee3101af6a255f321cd7451f7c5ea7504f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_black, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:41:39 compute-0 epic_black[202203]: 167 167
Nov 24 09:41:39 compute-0 systemd[1]: libpod-f9b495254b7c56aca7719a3682857ee3101af6a255f321cd7451f7c5ea7504f3.scope: Deactivated successfully.
Nov 24 09:41:39 compute-0 podman[202186]: 2025-11-24 09:41:39.602428065 +0000 UTC m=+0.132819328 container died f9b495254b7c56aca7719a3682857ee3101af6a255f321cd7451f7c5ea7504f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 09:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-36a16936eb0c7a5204a7cd6fec78d19c94a3ee8708ace0441b8d3d8b3b7aec57-merged.mount: Deactivated successfully.
Nov 24 09:41:39 compute-0 podman[202186]: 2025-11-24 09:41:39.636858929 +0000 UTC m=+0.167250192 container remove f9b495254b7c56aca7719a3682857ee3101af6a255f321cd7451f7c5ea7504f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:41:39 compute-0 systemd[1]: libpod-conmon-f9b495254b7c56aca7719a3682857ee3101af6a255f321cd7451f7c5ea7504f3.scope: Deactivated successfully.
Nov 24 09:41:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:39 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:39 compute-0 python3.9[202188]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:39 compute-0 podman[202227]: 2025-11-24 09:41:39.784806906 +0000 UTC m=+0.037914943 container create 75aaf30f5380af99790706d79acb75455e0988708f70467c4511dc8e5a7209be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 09:41:39 compute-0 systemd[1]: Started libpod-conmon-75aaf30f5380af99790706d79acb75455e0988708f70467c4511dc8e5a7209be.scope.
Nov 24 09:41:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a787991cecdce9d68b58bde52358d8c57e5e61600d300835370ff7249842534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a787991cecdce9d68b58bde52358d8c57e5e61600d300835370ff7249842534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a787991cecdce9d68b58bde52358d8c57e5e61600d300835370ff7249842534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:39 compute-0 systemd[1]: Reloading.
Nov 24 09:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a787991cecdce9d68b58bde52358d8c57e5e61600d300835370ff7249842534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:41:39 compute-0 podman[202227]: 2025-11-24 09:41:39.768693941 +0000 UTC m=+0.021802008 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:41:39 compute-0 podman[202227]: 2025-11-24 09:41:39.86972334 +0000 UTC m=+0.122831407 container init 75aaf30f5380af99790706d79acb75455e0988708f70467c4511dc8e5a7209be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 09:41:39 compute-0 podman[202227]: 2025-11-24 09:41:39.876032518 +0000 UTC m=+0.129140555 container start 75aaf30f5380af99790706d79acb75455e0988708f70467c4511dc8e5a7209be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_booth, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:41:39 compute-0 podman[202227]: 2025-11-24 09:41:39.87928114 +0000 UTC m=+0.132389207 container attach 75aaf30f5380af99790706d79acb75455e0988708f70467c4511dc8e5a7209be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_booth, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 24 09:41:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:39.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:39 compute-0 systemd-rc-local-generator[202276]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:39 compute-0 systemd-sysv-generator[202279]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:40 compute-0 sudo[202184]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:40 compute-0 lvm[202454]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:41:40 compute-0 lvm[202454]: VG ceph_vg0 finished
Nov 24 09:41:40 compute-0 intelligent_booth[202246]: {}
Nov 24 09:41:40 compute-0 systemd[1]: libpod-75aaf30f5380af99790706d79acb75455e0988708f70467c4511dc8e5a7209be.scope: Deactivated successfully.
Nov 24 09:41:40 compute-0 systemd[1]: libpod-75aaf30f5380af99790706d79acb75455e0988708f70467c4511dc8e5a7209be.scope: Consumed 1.166s CPU time.
Nov 24 09:41:40 compute-0 podman[202227]: 2025-11-24 09:41:40.649054339 +0000 UTC m=+0.902162366 container died 75aaf30f5380af99790706d79acb75455e0988708f70467c4511dc8e5a7209be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:41:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:40.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a787991cecdce9d68b58bde52358d8c57e5e61600d300835370ff7249842534-merged.mount: Deactivated successfully.
Nov 24 09:41:40 compute-0 sudo[202509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdjkpxdwgfbnskzmemanwycnhqqpdbft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977300.4093714-1055-143546714258874/AnsiballZ_systemd.py'
Nov 24 09:41:40 compute-0 sudo[202509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:40 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa94004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:40 compute-0 podman[202227]: 2025-11-24 09:41:40.68850209 +0000 UTC m=+0.941610127 container remove 75aaf30f5380af99790706d79acb75455e0988708f70467c4511dc8e5a7209be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:41:40 compute-0 systemd[1]: libpod-conmon-75aaf30f5380af99790706d79acb75455e0988708f70467c4511dc8e5a7209be.scope: Deactivated successfully.
Nov 24 09:41:40 compute-0 ceph-mon[74331]: pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:41:40 compute-0 sudo[201968]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:41:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:41:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:41:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:41:40 compute-0 sudo[202524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:41:40 compute-0 sudo[202524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:40 compute-0 sudo[202524]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:40 compute-0 python3.9[202522]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:41:40] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:41:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:41:40] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:41:41 compute-0 systemd[1]: Reloading.
Nov 24 09:41:41 compute-0 systemd-rc-local-generator[202577]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:41 compute-0 systemd-sysv-generator[202583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:41 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:41 compute-0 sudo[202509]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:41:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:41 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab8001110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:41 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:41:41 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:41:41 compute-0 sudo[202737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgbueyjwvvrqlxxkicptddtafgpkjjjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977301.5159006-1055-75024828820665/AnsiballZ_systemd.py'
Nov 24 09:41:41 compute-0 sudo[202737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:41.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:42 compute-0 python3.9[202739]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:42 compute-0 sudo[202742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:41:42 compute-0 sudo[202742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:41:42 compute-0 sudo[202742]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:42 compute-0 sudo[202737]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:42 compute-0 sudo[202918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epczlmvgnrdwmsjsctovljqncgbgmrib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977302.3550766-1055-72975329543905/AnsiballZ_systemd.py'
Nov 24 09:41:42 compute-0 sudo[202918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:42.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:42 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:42 compute-0 ceph-mon[74331]: pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:41:42 compute-0 python3.9[202920]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:43 compute-0 systemd[1]: Reloading.
Nov 24 09:41:43 compute-0 systemd-sysv-generator[202954]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:43 compute-0 systemd-rc-local-generator[202950]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:43 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:43 compute-0 sudo[202918]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:41:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:43 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:43.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:44 compute-0 sudo[203111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkturtrruoowfipitjfvfkdqsyheslkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977304.1557412-1163-140889462036073/AnsiballZ_systemd.py'
Nov 24 09:41:44 compute-0 sudo[203111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:44 : epoch 69242828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:41:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:44.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:44 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa88000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:44 compute-0 python3.9[203113]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 09:41:44 compute-0 ceph-mon[74331]: pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:41:44 compute-0 systemd[1]: Reloading.
Nov 24 09:41:44 compute-0 systemd-rc-local-generator[203142]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:41:44 compute-0 systemd-sysv-generator[203146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:41:45 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 24 09:41:45 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 24 09:41:45 compute-0 sudo[203111]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:45 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:45 compute-0 podman[203152]: 2025-11-24 09:41:45.18708991 +0000 UTC m=+0.077608800 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:41:45
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.log', '.rgw.root', '.nfs', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta']
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:41:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:41:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:45 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab8001eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:41:45 compute-0 sudo[203330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyfruuthihsnptdiusbwonjzodsvvyyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977305.56256-1187-97705486091387/AnsiballZ_systemd.py'
Nov 24 09:41:45 compute-0 sudo[203330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:45.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:46 compute-0 python3.9[203332]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:46 compute-0 sudo[203330]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:46 compute-0 sudo[203486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnncvyaduwalfthrhuhdpppomklvdlwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977306.3278456-1187-9043173555604/AnsiballZ_systemd.py'
Nov 24 09:41:46 compute-0 sudo[203486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:46.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:46 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:46 compute-0 ceph-mon[74331]: pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:41:46 compute-0 python3.9[203488]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:46 compute-0 sudo[203486]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:41:47.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:41:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:47 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:47 compute-0 sudo[203642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmgvcxcuhxvaacerrrmqdvzllhgmpfxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977307.1034038-1187-31111195862648/AnsiballZ_systemd.py'
Nov 24 09:41:47 compute-0 sudo[203642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Nov 24 09:41:47 compute-0 python3.9[203644]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:47 : epoch 69242828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:41:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:47 : epoch 69242828 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:41:47 compute-0 sudo[203642]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:47 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:47.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:48 compute-0 sudo[203798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgdyimccbalflucghlhcwnborrjjfqrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977308.0084825-1187-186751649437850/AnsiballZ_systemd.py'
Nov 24 09:41:48 compute-0 sudo[203798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:48 compute-0 python3.9[203800]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:48 compute-0 sudo[203798]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:48.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:48 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:48 compute-0 ceph-mon[74331]: pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Nov 24 09:41:49 compute-0 sudo[203953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnvivwohvphsyevriuowlzxainouoegf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977308.7669399-1187-9239405462476/AnsiballZ_systemd.py'
Nov 24 09:41:49 compute-0 sudo[203953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:49 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab8001eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:49 compute-0 python3.9[203955]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:49 compute-0 sudo[203953]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Nov 24 09:41:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:49 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:49 compute-0 sudo[204109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqcarruevttkephlnbpyhyxazctpypbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977309.5109944-1187-89753300155759/AnsiballZ_systemd.py'
Nov 24 09:41:49 compute-0 sudo[204109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:49.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:50 compute-0 python3.9[204111]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:50 compute-0 sudo[204109]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:50 compute-0 sudo[204265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvjwtdctymrzfwcbscnxzgetykbfudgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977310.2444453-1187-226531008967157/AnsiballZ_systemd.py'
Nov 24 09:41:50 compute-0 sudo[204265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:50 : epoch 69242828 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:41:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:50.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:50 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa880016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:50 compute-0 ceph-mon[74331]: pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Nov 24 09:41:50 compute-0 python3.9[204267]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:50 compute-0 sudo[204265]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:41:50] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:41:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:41:50] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:41:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:51 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:51 compute-0 sudo[204421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obaoeqhmrcxypniecthtulmrsuxbtwvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977311.1023319-1187-26585649298539/AnsiballZ_systemd.py'
Nov 24 09:41:51 compute-0 sudo[204421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:51 compute-0 python3.9[204423]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:51 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab8002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:51 compute-0 sudo[204421]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:51.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:52 compute-0 sudo[204587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojsjiqvtfecytgeqzbpcmbrhpbadwpig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977311.8745306-1187-77410370269847/AnsiballZ_systemd.py'
Nov 24 09:41:52 compute-0 sudo[204587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:52 compute-0 podman[204550]: 2025-11-24 09:41:52.163568804 +0000 UTC m=+0.063483595 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:41:52 compute-0 python3.9[204595]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:52 compute-0 sudo[204587]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:52.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:52 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:52 compute-0 ceph-mon[74331]: pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:52 compute-0 sudo[204751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwpgelpzwcqgmnzgpzalzjcimisppslk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977312.6675777-1187-27441641102172/AnsiballZ_systemd.py'
Nov 24 09:41:52 compute-0 sudo[204751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:53 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa88001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:53 compute-0 python3.9[204753]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:53 compute-0 sudo[204751]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:53 : epoch 69242828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:41:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:53 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac00096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:53 compute-0 sudo[204907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnwcqaioksvnyrkrdpsiwtputfzyicek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977313.4609835-1187-26302601613322/AnsiballZ_systemd.py'
Nov 24 09:41:53 compute-0 sudo[204907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:53.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:54 compute-0 python3.9[204909]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:54 compute-0 sudo[204907]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:54 compute-0 sudo[205063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujalwbjbvcfkoluivbksxkqhgasopqhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977314.3328545-1187-148963848075982/AnsiballZ_systemd.py'
Nov 24 09:41:54 compute-0 sudo[205063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:54.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:54 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab8002bc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:54 compute-0 python3.9[205065]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:54 compute-0 ceph-mon[74331]: pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:54 compute-0 sudo[205063]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:55 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:55 compute-0 sudo[205219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvvfgypodxaqfcggzbnmsnhwjaczylnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977315.0739307-1187-154869810245567/AnsiballZ_systemd.py'
Nov 24 09:41:55 compute-0 sudo[205219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:55 compute-0 python3.9[205221]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:55 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa88001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:55 compute-0 sudo[205219]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:55.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:41:56 compute-0 sudo[205374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lplaixgxkgyhnkitulnganxtryjzokuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977315.9039242-1187-185233506318197/AnsiballZ_systemd.py'
Nov 24 09:41:56 compute-0 sudo[205374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:41:56 compute-0 python3.9[205376]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 09:41:56 compute-0 sudo[205374]: pam_unix(sudo:session): session closed for user root
Nov 24 09:41:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:56.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:56 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:56 : epoch 69242828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:41:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:56 : epoch 69242828 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:41:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:41:57.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:41:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:41:57.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:41:57 compute-0 ceph-mon[74331]: pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:57 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab80038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Nov 24 09:41:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094157 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:41:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:57 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:41:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:57.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:41:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:41:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:41:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:41:58.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:41:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:58 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa88001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:59 compute-0 ceph-mon[74331]: pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Nov 24 09:41:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:59 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:59 : epoch 69242828 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:41:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:41:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:41:59 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab80038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:41:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:41:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:41:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:41:59.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:00 compute-0 sudo[205533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlkooshcwygawrmthatmphiddvzpjkin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977319.9067357-1493-9174327093739/AnsiballZ_file.py'
Nov 24 09:42:00 compute-0 sudo[205533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:00 compute-0 python3.9[205535]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:42:00 compute-0 sudo[205533]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:00.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:00 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:00 compute-0 sudo[205686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqeoezsoxwuvzkvzylgacdpdrhrppcyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977320.5371478-1493-87518925497321/AnsiballZ_file.py'
Nov 24 09:42:00 compute-0 sudo[205686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:42:00] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:42:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:42:00] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:42:01 compute-0 python3.9[205688]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:42:01 compute-0 sudo[205686]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:01 compute-0 ceph-mon[74331]: pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:42:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:42:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:01 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa880032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:01 compute-0 sudo[205839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsnvvlmosutgjcwlhwiujcsyvhfgxsey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977321.2012663-1493-187391875724891/AnsiballZ_file.py'
Nov 24 09:42:01 compute-0 sudo[205839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Nov 24 09:42:01 compute-0 python3.9[205841]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:42:01 compute-0 sudo[205839]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:01 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:01.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:02 compute-0 sudo[205991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhflgafaryxzypotyybxqjmhbsicsdlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977321.80325-1493-9637497588338/AnsiballZ_file.py'
Nov 24 09:42:02 compute-0 sudo[205991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:02 compute-0 python3.9[205993]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:42:02 compute-0 sudo[205991]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:02 compute-0 sudo[205994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:42:02 compute-0 sudo[205994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:02 compute-0 sudo[205994]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094202 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:42:02 compute-0 sudo[206169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-javdmagnysmnqgpgrcdzmxveadgjtakv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977322.3864903-1493-270383501371682/AnsiballZ_file.py'
Nov 24 09:42:02 compute-0 sudo[206169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:02.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:02 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:02 compute-0 python3.9[206171]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:42:02 compute-0 sudo[206169]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:03 compute-0 ceph-mon[74331]: pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Nov 24 09:42:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:03 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:03 compute-0 sudo[206322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtnjkrgotjlaazahmfhucuiblybcfidu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977323.0215297-1493-145841618858650/AnsiballZ_file.py'
Nov 24 09:42:03 compute-0 sudo[206322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:03 compute-0 python3.9[206324]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:42:03 compute-0 sudo[206322]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:42:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:03 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa880032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:03.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:04 compute-0 sudo[206475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjcopmhecvofsdrbfqcgknaxooiwizwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977324.2051678-1622-2470229524426/AnsiballZ_stat.py'
Nov 24 09:42:04 compute-0 sudo[206475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:04.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:04 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:04 compute-0 python3.9[206477]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:04 compute-0 sudo[206475]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:05 compute-0 ceph-mon[74331]: pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:42:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:05 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:05 compute-0 sudo[206601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waoydznjuyjsinsbvvhqpgkpnvpopsep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977324.2051678-1622-2470229524426/AnsiballZ_copy.py'
Nov 24 09:42:05 compute-0 sudo[206601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:05 compute-0 python3.9[206603]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977324.2051678-1622-2470229524426/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:05 compute-0 sudo[206601]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:42:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:05 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:05 compute-0 sudo[206753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llzzxxhrwyasbnahtbukfdmpcsniyizt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977325.6220145-1622-141024640318468/AnsiballZ_stat.py'
Nov 24 09:42:05 compute-0 sudo[206753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:05.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:06 compute-0 python3.9[206755]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:06 compute-0 sudo[206753]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:06 compute-0 sudo[206879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snxcillxywsityzvsypjgkbnjxfspmdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977325.6220145-1622-141024640318468/AnsiballZ_copy.py'
Nov 24 09:42:06 compute-0 sudo[206879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:06 compute-0 python3.9[206881]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977325.6220145-1622-141024640318468/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:06 compute-0 sudo[206879]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:06.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:06 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa88004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:42:07.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:42:07 compute-0 sudo[207031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-admmbwduwamljokgjwncnxbujkakpzad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977326.787715-1622-144460448522133/AnsiballZ_stat.py'
Nov 24 09:42:07 compute-0 sudo[207031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:07 compute-0 ceph-mon[74331]: pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:42:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:07 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa88004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:07 compute-0 python3.9[207033]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:07 compute-0 sudo[207031]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:07 compute-0 sudo[207157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygmbwweojasyllmetyfdedhjwohjqjtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977326.787715-1622-144460448522133/AnsiballZ_copy.py'
Nov 24 09:42:07 compute-0 sudo[207157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:42:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:07 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab80038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:07 compute-0 python3.9[207159]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977326.787715-1622-144460448522133/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:07 compute-0 sudo[207157]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:07.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:08 compute-0 sudo[207309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omngmdihyzedaegykrfiykajomfrdoct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977327.9389753-1622-240250460351655/AnsiballZ_stat.py'
Nov 24 09:42:08 compute-0 sudo[207309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:08 compute-0 python3.9[207312]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:08 compute-0 sudo[207309]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:08 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:08.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:09 compute-0 sudo[207435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpthxazpwjagrxmekucffgkkgbodsvry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977327.9389753-1622-240250460351655/AnsiballZ_copy.py'
Nov 24 09:42:09 compute-0 sudo[207435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:09 compute-0 ceph-mon[74331]: pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:42:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa88004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:09 compute-0 python3.9[207437]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977327.9389753-1622-240250460351655/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:09 compute-0 sudo[207435]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Nov 24 09:42:09 compute-0 sudo[207588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxdaiahtnmsrhryfbgonczpywyiqxrpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977329.363858-1622-13331895488431/AnsiballZ_stat.py'
Nov 24 09:42:09 compute-0 sudo[207588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:09 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa88004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:09 compute-0 python3.9[207590]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:09 compute-0 sudo[207588]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:09.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:10 compute-0 sudo[207713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zptqzjofbxthlkshzsximyfgtgcxwkve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977329.363858-1622-13331895488431/AnsiballZ_copy.py'
Nov 24 09:42:10 compute-0 sudo[207713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:10 compute-0 python3.9[207715]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977329.363858-1622-13331895488431/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:10 compute-0 sudo[207713]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:10 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab80038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:10.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:10 compute-0 sudo[207866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eliiwdivrnihwhnwabxneagywykqnris ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977330.538509-1622-189383902069290/AnsiballZ_stat.py'
Nov 24 09:42:10 compute-0 sudo[207866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:10 compute-0 python3.9[207868]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:42:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:42:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:42:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:42:11 compute-0 sudo[207866]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:11 compute-0 ceph-mon[74331]: pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Nov 24 09:42:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:11 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac003fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:11 compute-0 sudo[207992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icbzengwdyqyyolvpvyqeshpjwjzcady ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977330.538509-1622-189383902069290/AnsiballZ_copy.py'
Nov 24 09:42:11 compute-0 sudo[207992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:11 compute-0 python3.9[207994]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977330.538509-1622-189383902069290/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:11 compute-0 sudo[207992]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Nov 24 09:42:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:11 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa88004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:11.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:11 compute-0 sudo[208144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xksnbobkzdazcoyghmmkzxbiecooqass ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977331.681281-1622-91703336127714/AnsiballZ_stat.py'
Nov 24 09:42:11 compute-0 sudo[208144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:12 compute-0 python3.9[208146]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:12 compute-0 sudo[208144]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:12 compute-0 sudo[208268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruruncytrneslsknlgeksstgeaednubn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977331.681281-1622-91703336127714/AnsiballZ_copy.py'
Nov 24 09:42:12 compute-0 sudo[208268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:12 compute-0 python3.9[208270]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977331.681281-1622-91703336127714/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:12 compute-0 sudo[208268]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:12 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa88004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:12.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:13 compute-0 sudo[208420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpwsfngawemutbfpgyrpydbkpwffjcys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977332.8323398-1622-255319650048786/AnsiballZ_stat.py'
Nov 24 09:42:13 compute-0 sudo[208420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:13 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab80038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:13 compute-0 ceph-mon[74331]: pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Nov 24 09:42:13 compute-0 python3.9[208422]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:13 compute-0 sudo[208420]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:13 compute-0 sudo[208546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sujlnsnwaqzrstjeeocaqeklhfhitfbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977332.8323398-1622-255319650048786/AnsiballZ_copy.py'
Nov 24 09:42:13 compute-0 sudo[208546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:13 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:13 compute-0 python3.9[208548]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763977332.8323398-1622-255319650048786/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:13 compute-0 sudo[208546]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:13.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:14 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faaac003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:14.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:15 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:15 compute-0 ceph-mon[74331]: pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:42:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:42:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:42:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:42:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:42:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:42:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:15 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab80038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:15 compute-0 podman[208578]: 2025-11-24 09:42:15.795654837 +0000 UTC m=+0.073798065 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 24 09:42:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:42:16 compute-0 sudo[208732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgswjqomakwjpvylnyjcyckbxuonzkdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977336.181749-1961-82431604454963/AnsiballZ_command.py'
Nov 24 09:42:16 compute-0 sudo[208732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:16 compute-0 python3.9[208734]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 24 09:42:16 compute-0 sudo[208732]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:16 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab4001460 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:16.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:42:17.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:42:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:42:17.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:42:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:42:17.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:42:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:17 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa940033e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:17 compute-0 sudo[208886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egtaeiyrqsdaqshcxobdjxhykjfrcnun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977337.0479503-1988-103500936270255/AnsiballZ_file.py'
Nov 24 09:42:17 compute-0 sudo[208886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:17 compute-0 ceph-mon[74331]: pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:17 compute-0 python3.9[208888]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:17 compute-0 sudo[208886]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:17 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:17 compute-0 sudo[209038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgkhupjmuadqtkigyuacmqxdghfqbull ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977337.608285-1988-198826485165367/AnsiballZ_file.py'
Nov 24 09:42:17 compute-0 sudo[209038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:17.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:18 compute-0 python3.9[209040]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:18 compute-0 sudo[209038]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:18 compute-0 ceph-mon[74331]: pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:18 compute-0 sudo[209191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfsssyasyeajqulilojognaqdklupnqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977338.182545-1988-17787991880797/AnsiballZ_file.py'
Nov 24 09:42:18 compute-0 sudo[209191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:18 compute-0 python3.9[209193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:18 compute-0 sudo[209191]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:18 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab80038d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:18.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:19 compute-0 sudo[209343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nenwlsbivlmvkyxqbewxzysdujymhmmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977338.7788088-1988-21585711405795/AnsiballZ_file.py'
Nov 24 09:42:19 compute-0 sudo[209343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:19 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faab4001d80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:19 compute-0 python3.9[209345]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:19 compute-0 sudo[209343]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:19 compute-0 sudo[209496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjhsiuvdkbeggywtncgrwlycuewtefmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977339.4013453-1988-98354483679818/AnsiballZ_file.py'
Nov 24 09:42:19 compute-0 sudo[209496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:19 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faa940033e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:19 compute-0 python3.9[209498]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:19 compute-0 sudo[209496]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:19.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:20 compute-0 sudo[209648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-metfrymsqdrtmalhwnwiocdkqzmeoceb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977339.9811578-1988-116059752677634/AnsiballZ_file.py'
Nov 24 09:42:20 compute-0 sudo[209648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:20 compute-0 python3.9[209650]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:20 compute-0 sudo[209648]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:42:20.552 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:42:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:42:20.552 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:42:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:42:20.552 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:42:20 compute-0 ceph-mon[74331]: pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[190761]: 24/11/2025 09:42:20 : epoch 69242828 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faac000a3f0 fd 48 proxy ignored for local
Nov 24 09:42:20 compute-0 kernel: ganesha.nfsd[190886]: segfault at 50 ip 00007fab6c81332e sp 00007fab3bffe210 error 4 in libntirpc.so.5.8[7fab6c7f8000+2c000] likely on CPU 7 (core 0, socket 7)
Nov 24 09:42:20 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:42:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:20.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:20 compute-0 systemd[1]: Started Process Core Dump (PID 209763/UID 0).
Nov 24 09:42:20 compute-0 sudo[209803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syuenyxbnbzfrsqaxbpiqnskrruxkeua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977340.5367362-1988-156768074382222/AnsiballZ_file.py'
Nov 24 09:42:20 compute-0 sudo[209803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:42:20] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:42:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:42:20] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:42:21 compute-0 python3.9[209805]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:21 compute-0 sudo[209803]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:21 compute-0 sudo[209956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nowffhawzaqfonuelrlmvslrbhenpwun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977341.166395-1988-227628461503857/AnsiballZ_file.py'
Nov 24 09:42:21 compute-0 sudo[209956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:21 compute-0 python3.9[209958]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:42:21 compute-0 sudo[209956]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:21 compute-0 systemd-coredump[209776]: Process 190765 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007fab6c81332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:42:21 compute-0 systemd[1]: systemd-coredump@4-209763-0.service: Deactivated successfully.
Nov 24 09:42:21 compute-0 systemd[1]: systemd-coredump@4-209763-0.service: Consumed 1.016s CPU time.
Nov 24 09:42:21 compute-0 podman[210039]: 2025-11-24 09:42:21.877666688 +0000 UTC m=+0.027031760 container died 9ba91ae3e1e1e5f9865e5f38c289f288c02b1947600f66476b33f0b8ef6b5dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:42:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9970c1bf71d5f4b0ffe61f67e7998c3dc31577e113673819809f7ef01474cea-merged.mount: Deactivated successfully.
Nov 24 09:42:21 compute-0 podman[210039]: 2025-11-24 09:42:21.918201636 +0000 UTC m=+0.067566688 container remove 9ba91ae3e1e1e5f9865e5f38c289f288c02b1947600f66476b33f0b8ef6b5dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:42:21 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:42:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:21.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:22 compute-0 sudo[210140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cghrsjxztflzngarxjkccsfjwkiizmqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977341.7722847-1988-132514552970510/AnsiballZ_file.py'
Nov 24 09:42:22 compute-0 sudo[210140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:22 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Failed with result 'exit-code'.
Nov 24 09:42:22 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.420s CPU time.
Nov 24 09:42:22 compute-0 python3.9[210149]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:22 compute-0 sudo[210140]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:22 compute-0 sudo[210183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:42:22 compute-0 sudo[210183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:22 compute-0 sudo[210183]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:22 compute-0 podman[210230]: 2025-11-24 09:42:22.454029609 +0000 UTC m=+0.076532554 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 09:42:22 compute-0 sudo[210351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfvkibhsxkgjdffnmdnqnxszbyytkmuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977342.3645165-1988-102510340663023/AnsiballZ_file.py'
Nov 24 09:42:22 compute-0 sudo[210351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:22 compute-0 ceph-mon[74331]: pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 24 09:42:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:22.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:22 compute-0 python3.9[210353]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:22 compute-0 sudo[210351]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:23 compute-0 sudo[210503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-traacjofzexnzghssynmkzjximuepnlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977342.9407198-1988-117755262903972/AnsiballZ_file.py'
Nov 24 09:42:23 compute-0 sudo[210503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:23 compute-0 python3.9[210505]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:23 compute-0 sudo[210503]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:23 compute-0 sudo[210656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aezngpkqdgeuleldfknmktehoffxedsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977343.5179534-1988-94611498803144/AnsiballZ_file.py'
Nov 24 09:42:23 compute-0 sudo[210656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:23.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:23 compute-0 python3.9[210658]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:23 compute-0 sudo[210656]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:24 compute-0 sudo[210809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygbeebokceqoobxqdtfcetalccqfjtvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977344.1324704-1988-64477263425285/AnsiballZ_file.py'
Nov 24 09:42:24 compute-0 sudo[210809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:24 compute-0 ceph-mon[74331]: pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:42:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:24.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:42:24 compute-0 python3.9[210811]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:24 compute-0 sudo[210809]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:25 compute-0 sudo[210962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpdknzpneejxjttcniihxgrvwfwmjvwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977344.9501145-1988-6546755929539/AnsiballZ_file.py'
Nov 24 09:42:25 compute-0 sudo[210962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:25 compute-0 python3.9[210964]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:25 compute-0 sudo[210962]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:25.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094226 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:42:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:26.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:26 compute-0 ceph-mon[74331]: pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:42:27.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:42:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094227 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:42:27 compute-0 sudo[211116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utnzdhyzreduekoistrypusshbjhpagb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977347.4401221-2285-248188575961614/AnsiballZ_stat.py'
Nov 24 09:42:27 compute-0 sudo[211116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:27 compute-0 python3.9[211118]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:27 compute-0 sudo[211116]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:27.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:28 compute-0 sudo[211240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixjelkxrnnwkqzjjcupdnaaffvqasolo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977347.4401221-2285-248188575961614/AnsiballZ_copy.py'
Nov 24 09:42:28 compute-0 sudo[211240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:28 compute-0 python3.9[211242]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977347.4401221-2285-248188575961614/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:28 compute-0 sudo[211240]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:28.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:28 compute-0 ceph-mon[74331]: pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:28 compute-0 sudo[211392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvdyitgufizkgjownrzionlomytyzfjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977348.6284645-2285-224989620219910/AnsiballZ_stat.py'
Nov 24 09:42:28 compute-0 sudo[211392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:29 compute-0 python3.9[211394]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:29 compute-0 sudo[211392]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:29 compute-0 sudo[211516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvqvaigskpqgbuxiumejkofeizaejnyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977348.6284645-2285-224989620219910/AnsiballZ_copy.py'
Nov 24 09:42:29 compute-0 sudo[211516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:29.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:29 compute-0 python3.9[211518]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977348.6284645-2285-224989620219910/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:29 compute-0 sudo[211516]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:30 compute-0 sudo[211669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msfmgapbkccquhiympdclplykrgaievt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977350.1162782-2285-193596436748846/AnsiballZ_stat.py'
Nov 24 09:42:30 compute-0 sudo[211669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:30 compute-0 python3.9[211671]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:30 compute-0 sudo[211669]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:30.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:30 compute-0 sudo[211792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siejxcfzymdufvtgzsnkkbmmccbhltro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977350.1162782-2285-193596436748846/AnsiballZ_copy.py'
Nov 24 09:42:30 compute-0 sudo[211792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:30 compute-0 ceph-mon[74331]: pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:42:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:42:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:42:30] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:42:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:42:30] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:42:31 compute-0 python3.9[211794]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977350.1162782-2285-193596436748846/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:31 compute-0 sudo[211792]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:31 compute-0 sudo[211945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqkthqydinscnxbtamkpihpboysrgkae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977351.296678-2285-49194574419158/AnsiballZ_stat.py'
Nov 24 09:42:31 compute-0 sudo[211945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:42:31 compute-0 python3.9[211947]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:31 compute-0 sudo[211945]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:31.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:32 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Scheduled restart job, restart counter is at 5.
Nov 24 09:42:32 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:42:32 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.420s CPU time.
Nov 24 09:42:32 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:42:32 compute-0 sudo[212068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysbmenoqisnspnqjlruqpeydmhlpnngu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977351.296678-2285-49194574419158/AnsiballZ_copy.py'
Nov 24 09:42:32 compute-0 sudo[212068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:32 compute-0 python3.9[212071]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977351.296678-2285-49194574419158/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:32 compute-0 podman[212118]: 2025-11-24 09:42:32.281323884 +0000 UTC m=+0.042082688 container create a56abb98d214d14a158e60f98fb9f1de4024c78aaef5c622e277eb3dc23f922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 24 09:42:32 compute-0 sudo[212068]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ba2e271b16de74ce7faba6b706ff86468e237ba34eb6f6cc9c79cce17332a44/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ba2e271b16de74ce7faba6b706ff86468e237ba34eb6f6cc9c79cce17332a44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ba2e271b16de74ce7faba6b706ff86468e237ba34eb6f6cc9c79cce17332a44/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ba2e271b16de74ce7faba6b706ff86468e237ba34eb6f6cc9c79cce17332a44/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ssprex-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:32 compute-0 podman[212118]: 2025-11-24 09:42:32.334469959 +0000 UTC m=+0.095228763 container init a56abb98d214d14a158e60f98fb9f1de4024c78aaef5c622e277eb3dc23f922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:42:32 compute-0 podman[212118]: 2025-11-24 09:42:32.339173857 +0000 UTC m=+0.099932641 container start a56abb98d214d14a158e60f98fb9f1de4024c78aaef5c622e277eb3dc23f922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 09:42:32 compute-0 bash[212118]: a56abb98d214d14a158e60f98fb9f1de4024c78aaef5c622e277eb3dc23f922f
Nov 24 09:42:32 compute-0 podman[212118]: 2025-11-24 09:42:32.262562342 +0000 UTC m=+0.023321146 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:42:32 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:42:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:32 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:42:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:32 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:42:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:32 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:42:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:32 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:42:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:32 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:42:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:32 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:42:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:32 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:42:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:32 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:42:32 compute-0 sudo[212326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgewmmsetppweakroflcktvszznsyjor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977352.4233692-2285-25859317608326/AnsiballZ_stat.py'
Nov 24 09:42:32 compute-0 sudo[212326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:32.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:32 compute-0 python3.9[212328]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:32 compute-0 sudo[212326]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:32 compute-0 ceph-mon[74331]: pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:42:33 compute-0 sudo[212449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqoaaamiitjwnrazfbkmgnbuxkyyxdlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977352.4233692-2285-25859317608326/AnsiballZ_copy.py'
Nov 24 09:42:33 compute-0 sudo[212449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:33 compute-0 python3.9[212452]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977352.4233692-2285-25859317608326/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:33 compute-0 sudo[212449]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:42:33 compute-0 sudo[212602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzbcuaupsbtyvalqdugqufbsmdikgpry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977353.5530665-2285-192649414383457/AnsiballZ_stat.py'
Nov 24 09:42:33 compute-0 sudo[212602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:33.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:33 compute-0 python3.9[212604]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:34 compute-0 sudo[212602]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:34 compute-0 sudo[212726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkvwokwulihoqbwbhbzusjizbbarfwtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977353.5530665-2285-192649414383457/AnsiballZ_copy.py'
Nov 24 09:42:34 compute-0 sudo[212726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:34 compute-0 python3.9[212728]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977353.5530665-2285-192649414383457/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:34 compute-0 sudo[212726]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:34.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:34 compute-0 sudo[212878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulkdeyxcnnkxkdvxjzvkclkxijdzbhql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977354.6609375-2285-134000328105525/AnsiballZ_stat.py'
Nov 24 09:42:34 compute-0 sudo[212878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:34 compute-0 ceph-mon[74331]: pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:42:35 compute-0 python3.9[212880]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:35 compute-0 sudo[212878]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.122828) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355122868, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4203, "num_deletes": 502, "total_data_size": 8572583, "memory_usage": 8700160, "flush_reason": "Manual Compaction"}
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355220438, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 8337447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13213, "largest_seqno": 17415, "table_properties": {"data_size": 8319676, "index_size": 12025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36611, "raw_average_key_size": 19, "raw_value_size": 8283149, "raw_average_value_size": 4458, "num_data_blocks": 525, "num_entries": 1858, "num_filter_entries": 1858, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976901, "oldest_key_time": 1763976901, "file_creation_time": 1763977355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 97684 microseconds, and 12473 cpu microseconds.
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.220498) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 8337447 bytes OK
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.220538) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.285085) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.285180) EVENT_LOG_v1 {"time_micros": 1763977355285168, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.285215) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8555800, prev total WAL file size 8555800, number of live WAL files 2.
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.287185) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(8142KB)], [32(12MB)]
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355287246, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 21278105, "oldest_snapshot_seqno": -1}
Nov 24 09:42:35 compute-0 sudo[213002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrwvzjyidxbulzpssfvbshwuryhkjfug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977354.6609375-2285-134000328105525/AnsiballZ_copy.py'
Nov 24 09:42:35 compute-0 sudo[213002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5095 keys, 15469778 bytes, temperature: kUnknown
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355475759, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15469778, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15430917, "index_size": 24991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 127482, "raw_average_key_size": 25, "raw_value_size": 15333948, "raw_average_value_size": 3009, "num_data_blocks": 1048, "num_entries": 5095, "num_filter_entries": 5095, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763977355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.476015) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15469778 bytes
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.479614) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.8 rd, 82.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 12.3 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(4.4) write-amplify(1.9) OK, records in: 6117, records dropped: 1022 output_compression: NoCompression
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.479635) EVENT_LOG_v1 {"time_micros": 1763977355479626, "job": 14, "event": "compaction_finished", "compaction_time_micros": 188591, "compaction_time_cpu_micros": 29418, "output_level": 6, "num_output_files": 1, "total_output_size": 15469778, "num_input_records": 6117, "num_output_records": 5095, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355481059, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977355483246, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.287125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.483492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.483498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.483500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.483506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:42:35 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:42:35.483508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:42:35 compute-0 python3.9[213004]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977354.6609375-2285-134000328105525/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:42:35 compute-0 sudo[213002]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:35.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:35 compute-0 sudo[213154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebbdwdfkriatujfkjjtizjuuhekaeznd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977355.757231-2285-111768315279246/AnsiballZ_stat.py'
Nov 24 09:42:35 compute-0 sudo[213154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:36 compute-0 python3.9[213156]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:36 compute-0 sudo[213154]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:36 compute-0 sudo[213278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swqnfkjtniykwpxrehjjwwlfkvuymxcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977355.757231-2285-111768315279246/AnsiballZ_copy.py'
Nov 24 09:42:36 compute-0 sudo[213278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:36 compute-0 python3.9[213280]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977355.757231-2285-111768315279246/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:36 compute-0 sudo[213278]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:36.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:42:37.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:42:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:42:37.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:42:37 compute-0 sudo[213430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aotfwvwquyduibrjniqjbmaccagfknwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977356.7995536-2285-96288503207864/AnsiballZ_stat.py'
Nov 24 09:42:37 compute-0 sudo[213430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:37 compute-0 ceph-mon[74331]: pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 24 09:42:37 compute-0 python3.9[213432]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:37 compute-0 sudo[213430]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:37 compute-0 sudo[213554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tngifbjrawzpvodefsaeahobhgltdiya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977356.7995536-2285-96288503207864/AnsiballZ_copy.py'
Nov 24 09:42:37 compute-0 sudo[213554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Nov 24 09:42:37 compute-0 python3.9[213556]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977356.7995536-2285-96288503207864/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:37 compute-0 sudo[213554]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:37.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:38 compute-0 sudo[213706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtgtzbaugsnaxigkljuqiizfopmphbkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977357.9327726-2285-150228215962888/AnsiballZ_stat.py'
Nov 24 09:42:38 compute-0 sudo[213706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:38 compute-0 python3.9[213708]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:38 compute-0 sudo[213706]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094238 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:42:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [NOTICE] 327/094238 (4) : haproxy version is 2.3.17-d1c9119
Nov 24 09:42:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [NOTICE] 327/094238 (4) : path to executable is /usr/local/sbin/haproxy
Nov 24 09:42:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [ALERT] 327/094238 (4) : backend 'backend' has no server available!
Nov 24 09:42:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:38 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:42:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:38 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:42:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:38 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:42:38 compute-0 sudo[213830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzlasjnyffftkenbndgpxkdkctjmfeyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977357.9327726-2285-150228215962888/AnsiballZ_copy.py'
Nov 24 09:42:38 compute-0 sudo[213830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:38.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:38 compute-0 python3.9[213832]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977357.9327726-2285-150228215962888/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:38 compute-0 sudo[213830]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:39 compute-0 ceph-mon[74331]: pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Nov 24 09:42:39 compute-0 sudo[213983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocmkjtygmdlergjnrswvzbktcmnkjzrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977359.0133886-2285-133758526377738/AnsiballZ_stat.py'
Nov 24 09:42:39 compute-0 sudo[213983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:39 compute-0 python3.9[213985]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:39 compute-0 sudo[213983]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 341 B/s wr, 1 op/s
Nov 24 09:42:39 compute-0 sudo[214106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hntuvxnxsudmwqrjztfcczlrjdojcxlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977359.0133886-2285-133758526377738/AnsiballZ_copy.py'
Nov 24 09:42:39 compute-0 sudo[214106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:39.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:39 compute-0 python3.9[214108]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977359.0133886-2285-133758526377738/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:40 compute-0 sudo[214106]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:40 compute-0 sudo[214259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ildauajmxzanzzgcsykesjgsnnvlkxdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977360.1511931-2285-218954298087322/AnsiballZ_stat.py'
Nov 24 09:42:40 compute-0 sudo[214259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:40 compute-0 python3.9[214261]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:40 compute-0 sudo[214259]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:40.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:42:40] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:42:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:42:40] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:42:41 compute-0 sudo[214382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buztwbxfmrcqqqcfvqwqzmhhcctfxfup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977360.1511931-2285-218954298087322/AnsiballZ_copy.py'
Nov 24 09:42:41 compute-0 sudo[214382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:41 compute-0 ceph-mon[74331]: pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 341 B/s wr, 1 op/s
Nov 24 09:42:41 compute-0 sudo[214385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:42:41 compute-0 sudo[214385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:41 compute-0 sudo[214385]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:41 compute-0 sudo[214411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:42:41 compute-0 sudo[214411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:41 compute-0 python3.9[214384]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977360.1511931-2285-218954298087322/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:41 compute-0 sudo[214382]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:42:41 compute-0 sudo[214670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmoxrnzyskdikbkjhmfclcgttrybsexb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977361.5812428-2285-81041430281987/AnsiballZ_stat.py'
Nov 24 09:42:41 compute-0 sudo[214670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:41 compute-0 podman[214632]: 2025-11-24 09:42:41.90212399 +0000 UTC m=+0.076181404 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:42:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:41.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:42 compute-0 podman[214632]: 2025-11-24 09:42:42.032790687 +0000 UTC m=+0.206848121 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:42:42 compute-0 python3.9[214676]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:42 compute-0 sudo[214670]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:42 compute-0 sudo[214907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilovsvpjyrexdhriakqslphdejnpbopa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977361.5812428-2285-81041430281987/AnsiballZ_copy.py'
Nov 24 09:42:42 compute-0 sudo[214856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:42:42 compute-0 sudo[214907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:42 compute-0 sudo[214856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:42 compute-0 sudo[214856]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:42 compute-0 python3.9[214913]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977361.5812428-2285-81041430281987/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:42 compute-0 sudo[214907]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:42 compute-0 podman[214943]: 2025-11-24 09:42:42.698031513 +0000 UTC m=+0.084753245 container exec c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:42:42 compute-0 podman[214943]: 2025-11-24 09:42:42.743542466 +0000 UTC m=+0.130264168 container exec_died c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:42:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:42.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:42 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:42:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:43 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:42:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:43 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:42:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:43 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:42:43 compute-0 podman[215119]: 2025-11-24 09:42:43.088272831 +0000 UTC m=+0.068725432 container exec a56abb98d214d14a158e60f98fb9f1de4024c78aaef5c622e277eb3dc23f922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 09:42:43 compute-0 podman[215119]: 2025-11-24 09:42:43.10452963 +0000 UTC m=+0.084982211 container exec_died a56abb98d214d14a158e60f98fb9f1de4024c78aaef5c622e277eb3dc23f922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:42:43 compute-0 sudo[215184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cetzkylplgtakxnesyspjpnfdurqntxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977362.8162587-2285-209744066659919/AnsiballZ_stat.py'
Nov 24 09:42:43 compute-0 sudo[215184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:43 compute-0 ceph-mon[74331]: pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:42:43 compute-0 podman[215232]: 2025-11-24 09:42:43.337261688 +0000 UTC m=+0.053988423 container exec 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:42:43 compute-0 python3.9[215193]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:42:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:43 compute-0 podman[215232]: 2025-11-24 09:42:43.356551256 +0000 UTC m=+0.073278011 container exec_died 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:42:43 compute-0 sudo[215184]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:43 compute-0 podman[215345]: 2025-11-24 09:42:43.617643844 +0000 UTC m=+0.062081360 container exec da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., version=2.2.4, build-date=2023-02-22T09:23:20, distribution-scope=public, description=keepalived for Ceph, name=keepalived, release=1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 24 09:42:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:42:43 compute-0 podman[215345]: 2025-11-24 09:42:43.667598722 +0000 UTC m=+0.112036188 container exec_died da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, distribution-scope=public, name=keepalived, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., version=2.2.4, build-date=2023-02-22T09:23:20, description=keepalived for Ceph)
Nov 24 09:42:43 compute-0 sudo[215458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aefwjnfxvwtnisihmtnbwllonocajhgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977362.8162587-2285-209744066659919/AnsiballZ_copy.py'
Nov 24 09:42:43 compute-0 sudo[215458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:43 compute-0 podman[215487]: 2025-11-24 09:42:43.937014526 +0000 UTC m=+0.072327335 container exec 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:42:43 compute-0 podman[215487]: 2025-11-24 09:42:43.964786682 +0000 UTC m=+0.100099471 container exec_died 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:42:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:43.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:43 compute-0 python3.9[215469]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977362.8162587-2285-209744066659919/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:44 compute-0 sudo[215458]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:44 compute-0 podman[215586]: 2025-11-24 09:42:44.199152783 +0000 UTC m=+0.062825921 container exec 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:42:44 compute-0 podman[215586]: 2025-11-24 09:42:44.433300687 +0000 UTC m=+0.296973825 container exec_died 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:42:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:44.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:44 compute-0 podman[215699]: 2025-11-24 09:42:44.826280765 +0000 UTC m=+0.054995658 container exec 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:42:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:44 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:42:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:44 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:42:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:44 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:42:44 compute-0 podman[215699]: 2025-11-24 09:42:44.866545013 +0000 UTC m=+0.095259916 container exec_died 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:42:44 compute-0 sudo[214411]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:42:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:42:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:45 compute-0 sudo[215741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:42:45 compute-0 sudo[215741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:45 compute-0 sudo[215741]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:45 compute-0 sudo[215766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:42:45 compute-0 sudo[215766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:45 compute-0 ceph-mon[74331]: pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:42:45 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:45 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:42:45
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.meta', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', '.nfs', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.log']
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:42:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:42:45 compute-0 sudo[215766]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:42:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:42:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:45 compute-0 sudo[215922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:42:45 compute-0 sudo[215922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:45 compute-0 sudo[215922]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:45 compute-0 sudo[215975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:42:45 compute-0 sudo[215975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:45 compute-0 podman[215973]: 2025-11-24 09:42:45.935471893 +0000 UTC m=+0.085382421 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Nov 24 09:42:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:45.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:46 compute-0 python3.9[215972]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:42:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:42:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:42:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:42:46 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:46 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:42:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:42:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:42:46 compute-0 podman[216095]: 2025-11-24 09:42:46.315434856 +0000 UTC m=+0.043531023 container create a26b14b3901d757b61da4d3b17138c26fcdd058846a3b7f2693e0563c9c48869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hopper, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 24 09:42:46 compute-0 systemd[1]: Started libpod-conmon-a26b14b3901d757b61da4d3b17138c26fcdd058846a3b7f2693e0563c9c48869.scope.
Nov 24 09:42:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:42:46 compute-0 podman[216095]: 2025-11-24 09:42:46.294929427 +0000 UTC m=+0.023025604 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:42:46 compute-0 podman[216095]: 2025-11-24 09:42:46.400493038 +0000 UTC m=+0.128589225 container init a26b14b3901d757b61da4d3b17138c26fcdd058846a3b7f2693e0563c9c48869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:42:46 compute-0 podman[216095]: 2025-11-24 09:42:46.411431691 +0000 UTC m=+0.139527878 container start a26b14b3901d757b61da4d3b17138c26fcdd058846a3b7f2693e0563c9c48869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:42:46 compute-0 podman[216095]: 2025-11-24 09:42:46.415981218 +0000 UTC m=+0.144077395 container attach a26b14b3901d757b61da4d3b17138c26fcdd058846a3b7f2693e0563c9c48869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hopper, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:42:46 compute-0 systemd[1]: libpod-a26b14b3901d757b61da4d3b17138c26fcdd058846a3b7f2693e0563c9c48869.scope: Deactivated successfully.
Nov 24 09:42:46 compute-0 zen_hopper[216142]: 167 167
Nov 24 09:42:46 compute-0 conmon[216142]: conmon a26b14b3901d757b61da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a26b14b3901d757b61da4d3b17138c26fcdd058846a3b7f2693e0563c9c48869.scope/container/memory.events
Nov 24 09:42:46 compute-0 podman[216095]: 2025-11-24 09:42:46.420297159 +0000 UTC m=+0.148393326 container died a26b14b3901d757b61da4d3b17138c26fcdd058846a3b7f2693e0563c9c48869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hopper, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 09:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-fad4c6461ac3341e08306fb53288531142f9e2cc550a45f95cfc05745e0f6017-merged.mount: Deactivated successfully.
Nov 24 09:42:46 compute-0 podman[216095]: 2025-11-24 09:42:46.469778314 +0000 UTC m=+0.197874491 container remove a26b14b3901d757b61da4d3b17138c26fcdd058846a3b7f2693e0563c9c48869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:42:46 compute-0 systemd[1]: libpod-conmon-a26b14b3901d757b61da4d3b17138c26fcdd058846a3b7f2693e0563c9c48869.scope: Deactivated successfully.
Nov 24 09:42:46 compute-0 podman[216188]: 2025-11-24 09:42:46.676983694 +0000 UTC m=+0.048179702 container create 4a3c2b90128c0767dce439dbd4b76d7f6e03c68add2090b4e02eef9ba6d8d996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ptolemy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:42:46 compute-0 systemd[1]: Started libpod-conmon-4a3c2b90128c0767dce439dbd4b76d7f6e03c68add2090b4e02eef9ba6d8d996.scope.
Nov 24 09:42:46 compute-0 podman[216188]: 2025-11-24 09:42:46.656776344 +0000 UTC m=+0.027972402 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:42:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:46.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7274d22e938d5c7d18ad12cac7f5f207a96f68eb61bd6406229b3b4ae8b573b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7274d22e938d5c7d18ad12cac7f5f207a96f68eb61bd6406229b3b4ae8b573b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7274d22e938d5c7d18ad12cac7f5f207a96f68eb61bd6406229b3b4ae8b573b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7274d22e938d5c7d18ad12cac7f5f207a96f68eb61bd6406229b3b4ae8b573b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7274d22e938d5c7d18ad12cac7f5f207a96f68eb61bd6406229b3b4ae8b573b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:46 compute-0 podman[216188]: 2025-11-24 09:42:46.783320995 +0000 UTC m=+0.154517033 container init 4a3c2b90128c0767dce439dbd4b76d7f6e03c68add2090b4e02eef9ba6d8d996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ptolemy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 24 09:42:46 compute-0 podman[216188]: 2025-11-24 09:42:46.793014405 +0000 UTC m=+0.164210413 container start 4a3c2b90128c0767dce439dbd4b76d7f6e03c68add2090b4e02eef9ba6d8d996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:42:46 compute-0 podman[216188]: 2025-11-24 09:42:46.796282509 +0000 UTC m=+0.167478517 container attach 4a3c2b90128c0767dce439dbd4b76d7f6e03c68add2090b4e02eef9ba6d8d996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ptolemy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:42:46 compute-0 sudo[216283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aomwljikysotvpfrfodcofznfpydwcsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977366.337697-2903-70903571134133/AnsiballZ_seboolean.py'
Nov 24 09:42:46 compute-0 sudo[216283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:42:47.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:42:47 compute-0 python3.9[216285]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 24 09:42:47 compute-0 intelligent_ptolemy[216229]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:42:47 compute-0 intelligent_ptolemy[216229]: --> All data devices are unavailable
Nov 24 09:42:47 compute-0 systemd[1]: libpod-4a3c2b90128c0767dce439dbd4b76d7f6e03c68add2090b4e02eef9ba6d8d996.scope: Deactivated successfully.
Nov 24 09:42:47 compute-0 podman[216188]: 2025-11-24 09:42:47.158389232 +0000 UTC m=+0.529585240 container died 4a3c2b90128c0767dce439dbd4b76d7f6e03c68add2090b4e02eef9ba6d8d996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7274d22e938d5c7d18ad12cac7f5f207a96f68eb61bd6406229b3b4ae8b573b6-merged.mount: Deactivated successfully.
Nov 24 09:42:47 compute-0 podman[216188]: 2025-11-24 09:42:47.207334143 +0000 UTC m=+0.578530151 container remove 4a3c2b90128c0767dce439dbd4b76d7f6e03c68add2090b4e02eef9ba6d8d996 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 24 09:42:47 compute-0 systemd[1]: libpod-conmon-4a3c2b90128c0767dce439dbd4b76d7f6e03c68add2090b4e02eef9ba6d8d996.scope: Deactivated successfully.
Nov 24 09:42:47 compute-0 sudo[215975]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:47 compute-0 ceph-mon[74331]: pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:42:47 compute-0 sudo[216309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:42:47 compute-0 sudo[216309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:47 compute-0 sudo[216309]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:47 compute-0 sudo[216334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:42:47 compute-0 sudo[216334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 767 B/s wr, 3 op/s
Nov 24 09:42:47 compute-0 podman[216398]: 2025-11-24 09:42:47.812678685 +0000 UTC m=+0.076656296 container create 48371e7aad3614ea635d227bcc31e51fde1e750a86b65e51241fac695637961c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_burnell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:42:47 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 24 09:42:47 compute-0 podman[216398]: 2025-11-24 09:42:47.759753292 +0000 UTC m=+0.023730923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:42:47 compute-0 systemd[1]: Started libpod-conmon-48371e7aad3614ea635d227bcc31e51fde1e750a86b65e51241fac695637961c.scope.
Nov 24 09:42:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:42:47 compute-0 podman[216398]: 2025-11-24 09:42:47.913698799 +0000 UTC m=+0.177676500 container init 48371e7aad3614ea635d227bcc31e51fde1e750a86b65e51241fac695637961c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 09:42:47 compute-0 podman[216398]: 2025-11-24 09:42:47.922617589 +0000 UTC m=+0.186595200 container start 48371e7aad3614ea635d227bcc31e51fde1e750a86b65e51241fac695637961c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_burnell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:42:47 compute-0 interesting_burnell[216414]: 167 167
Nov 24 09:42:47 compute-0 systemd[1]: libpod-48371e7aad3614ea635d227bcc31e51fde1e750a86b65e51241fac695637961c.scope: Deactivated successfully.
Nov 24 09:42:47 compute-0 conmon[216414]: conmon 48371e7aad3614ea635d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48371e7aad3614ea635d227bcc31e51fde1e750a86b65e51241fac695637961c.scope/container/memory.events
Nov 24 09:42:47 compute-0 podman[216398]: 2025-11-24 09:42:47.932176485 +0000 UTC m=+0.196154186 container attach 48371e7aad3614ea635d227bcc31e51fde1e750a86b65e51241fac695637961c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_burnell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:42:47 compute-0 podman[216398]: 2025-11-24 09:42:47.932968915 +0000 UTC m=+0.196946566 container died 48371e7aad3614ea635d227bcc31e51fde1e750a86b65e51241fac695637961c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_burnell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c61e22406ad5aa6d92af01de546d79170f0b700006f5c3a817820f99616555c-merged.mount: Deactivated successfully.
Nov 24 09:42:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:47.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:48 compute-0 podman[216398]: 2025-11-24 09:42:48.086975384 +0000 UTC m=+0.350952995 container remove 48371e7aad3614ea635d227bcc31e51fde1e750a86b65e51241fac695637961c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_burnell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:42:48 compute-0 systemd[1]: libpod-conmon-48371e7aad3614ea635d227bcc31e51fde1e750a86b65e51241fac695637961c.scope: Deactivated successfully.
Nov 24 09:42:48 compute-0 sudo[216283]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:48 compute-0 podman[216443]: 2025-11-24 09:42:48.27961008 +0000 UTC m=+0.072585192 container create d609cdeaea74675d11e0b757ae0dcf78d815d5850cde0eb58e62b27a84ab609c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_feistel, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:42:48 compute-0 systemd[1]: Started libpod-conmon-d609cdeaea74675d11e0b757ae0dcf78d815d5850cde0eb58e62b27a84ab609c.scope.
Nov 24 09:42:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2339f598cea58e8a821c5f0a974a0b9243976b0175669ed6c924c972f32d28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:48 compute-0 podman[216443]: 2025-11-24 09:42:48.244862824 +0000 UTC m=+0.037837966 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2339f598cea58e8a821c5f0a974a0b9243976b0175669ed6c924c972f32d28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2339f598cea58e8a821c5f0a974a0b9243976b0175669ed6c924c972f32d28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2339f598cea58e8a821c5f0a974a0b9243976b0175669ed6c924c972f32d28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:48 compute-0 podman[216443]: 2025-11-24 09:42:48.354776917 +0000 UTC m=+0.147752059 container init d609cdeaea74675d11e0b757ae0dcf78d815d5850cde0eb58e62b27a84ab609c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_feistel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:42:48 compute-0 podman[216443]: 2025-11-24 09:42:48.360907745 +0000 UTC m=+0.153882847 container start d609cdeaea74675d11e0b757ae0dcf78d815d5850cde0eb58e62b27a84ab609c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_feistel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 09:42:48 compute-0 podman[216443]: 2025-11-24 09:42:48.36423662 +0000 UTC m=+0.157211732 container attach d609cdeaea74675d11e0b757ae0dcf78d815d5850cde0eb58e62b27a84ab609c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:42:48 compute-0 ceph-mon[74331]: pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 767 B/s wr, 3 op/s
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]: {
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:     "0": [
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:         {
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             "devices": [
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "/dev/loop3"
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             ],
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             "lv_name": "ceph_lv0",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             "lv_size": "21470642176",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             "name": "ceph_lv0",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             "tags": {
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.cluster_name": "ceph",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.crush_device_class": "",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.encrypted": "0",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.osd_id": "0",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.type": "block",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.vdo": "0",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:                 "ceph.with_tpm": "0"
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             },
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             "type": "block",
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:             "vg_name": "ceph_vg0"
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:         }
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]:     ]
Nov 24 09:42:48 compute-0 relaxed_feistel[216484]: }
Nov 24 09:42:48 compute-0 systemd[1]: libpod-d609cdeaea74675d11e0b757ae0dcf78d815d5850cde0eb58e62b27a84ab609c.scope: Deactivated successfully.
Nov 24 09:42:48 compute-0 podman[216443]: 2025-11-24 09:42:48.686743443 +0000 UTC m=+0.479718565 container died d609cdeaea74675d11e0b757ae0dcf78d815d5850cde0eb58e62b27a84ab609c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_feistel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:42:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a2339f598cea58e8a821c5f0a974a0b9243976b0175669ed6c924c972f32d28-merged.mount: Deactivated successfully.
Nov 24 09:42:48 compute-0 podman[216443]: 2025-11-24 09:42:48.746907854 +0000 UTC m=+0.539882976 container remove d609cdeaea74675d11e0b757ae0dcf78d815d5850cde0eb58e62b27a84ab609c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:42:48 compute-0 systemd[1]: libpod-conmon-d609cdeaea74675d11e0b757ae0dcf78d815d5850cde0eb58e62b27a84ab609c.scope: Deactivated successfully.
Nov 24 09:42:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:48.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:48 compute-0 sudo[216334]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:48 compute-0 sudo[216557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:42:48 compute-0 sudo[216557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:48 compute-0 sudo[216557]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:48 compute-0 sudo[216605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:42:48 compute-0 sudo[216605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:49 compute-0 sudo[216680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnwnnfcuxwnwapmxyilrssdrfixfqljb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977368.7386298-2927-55148655263289/AnsiballZ_copy.py'
Nov 24 09:42:49 compute-0 sudo[216680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:49 compute-0 python3.9[216682]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:49 compute-0 sudo[216680]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:49 compute-0 podman[216724]: 2025-11-24 09:42:49.321866172 +0000 UTC m=+0.046265273 container create b141f95d965fa52661425040324bfdb111c8c7756c8b425a6743ee66131081df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:42:49 compute-0 systemd[1]: Started libpod-conmon-b141f95d965fa52661425040324bfdb111c8c7756c8b425a6743ee66131081df.scope.
Nov 24 09:42:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:42:49 compute-0 podman[216724]: 2025-11-24 09:42:49.301349504 +0000 UTC m=+0.025748625 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:42:49 compute-0 podman[216724]: 2025-11-24 09:42:49.404286306 +0000 UTC m=+0.128685427 container init b141f95d965fa52661425040324bfdb111c8c7756c8b425a6743ee66131081df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 09:42:49 compute-0 podman[216724]: 2025-11-24 09:42:49.412680483 +0000 UTC m=+0.137079584 container start b141f95d965fa52661425040324bfdb111c8c7756c8b425a6743ee66131081df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_lovelace, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:42:49 compute-0 podman[216724]: 2025-11-24 09:42:49.416538252 +0000 UTC m=+0.140937353 container attach b141f95d965fa52661425040324bfdb111c8c7756c8b425a6743ee66131081df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_lovelace, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 09:42:49 compute-0 strange_lovelace[216764]: 167 167
Nov 24 09:42:49 compute-0 systemd[1]: libpod-b141f95d965fa52661425040324bfdb111c8c7756c8b425a6743ee66131081df.scope: Deactivated successfully.
Nov 24 09:42:49 compute-0 podman[216724]: 2025-11-24 09:42:49.42033493 +0000 UTC m=+0.144734031 container died b141f95d965fa52661425040324bfdb111c8c7756c8b425a6743ee66131081df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_lovelace, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 09:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-621a113ea74bf5012453d825605f3ac25306060e123600e209b1bf0b05034696-merged.mount: Deactivated successfully.
Nov 24 09:42:49 compute-0 podman[216724]: 2025-11-24 09:42:49.464219891 +0000 UTC m=+0.188618992 container remove b141f95d965fa52661425040324bfdb111c8c7756c8b425a6743ee66131081df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:42:49 compute-0 systemd[1]: libpod-conmon-b141f95d965fa52661425040324bfdb111c8c7756c8b425a6743ee66131081df.scope: Deactivated successfully.
Nov 24 09:42:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:42:49 compute-0 podman[216871]: 2025-11-24 09:42:49.631180574 +0000 UTC m=+0.046624852 container create 2739d6c68d07596f8a71ca67703ceb5b269399fda4b2a1a5bb3bbe04e1290af3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 09:42:49 compute-0 systemd[1]: Started libpod-conmon-2739d6c68d07596f8a71ca67703ceb5b269399fda4b2a1a5bb3bbe04e1290af3.scope.
Nov 24 09:42:49 compute-0 sudo[216930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpjzgmdmuiusadexljultxuuxeumwprb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977369.4068882-2927-174555413210306/AnsiballZ_copy.py'
Nov 24 09:42:49 compute-0 sudo[216930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:49 compute-0 podman[216871]: 2025-11-24 09:42:49.61083348 +0000 UTC m=+0.026277788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:42:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6371f62059a625dbe2762b42d02a99658a47c42fd54e385f91e7522df80ed7ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6371f62059a625dbe2762b42d02a99658a47c42fd54e385f91e7522df80ed7ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6371f62059a625dbe2762b42d02a99658a47c42fd54e385f91e7522df80ed7ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6371f62059a625dbe2762b42d02a99658a47c42fd54e385f91e7522df80ed7ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:42:49 compute-0 podman[216871]: 2025-11-24 09:42:49.743885889 +0000 UTC m=+0.159330197 container init 2739d6c68d07596f8a71ca67703ceb5b269399fda4b2a1a5bb3bbe04e1290af3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:42:49 compute-0 podman[216871]: 2025-11-24 09:42:49.753847616 +0000 UTC m=+0.169291894 container start 2739d6c68d07596f8a71ca67703ceb5b269399fda4b2a1a5bb3bbe04e1290af3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_montalcini, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:42:49 compute-0 podman[216871]: 2025-11-24 09:42:49.75713126 +0000 UTC m=+0.172575558 container attach 2739d6c68d07596f8a71ca67703ceb5b269399fda4b2a1a5bb3bbe04e1290af3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:42:49 compute-0 python3.9[216934]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:49 compute-0 sudo[216930]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:49.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:50 compute-0 sudo[217152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnhgcgkjxjkkcbasytdcnluifcicnnfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977370.0482519-2927-86302916688724/AnsiballZ_copy.py'
Nov 24 09:42:50 compute-0 sudo[217152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:50 compute-0 lvm[217160]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:42:50 compute-0 lvm[217160]: VG ceph_vg0 finished
Nov 24 09:42:50 compute-0 clever_montalcini[216932]: {}
Nov 24 09:42:50 compute-0 systemd[1]: libpod-2739d6c68d07596f8a71ca67703ceb5b269399fda4b2a1a5bb3bbe04e1290af3.scope: Deactivated successfully.
Nov 24 09:42:50 compute-0 podman[216871]: 2025-11-24 09:42:50.459420581 +0000 UTC m=+0.874864869 container died 2739d6c68d07596f8a71ca67703ceb5b269399fda4b2a1a5bb3bbe04e1290af3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_montalcini, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:42:50 compute-0 systemd[1]: libpod-2739d6c68d07596f8a71ca67703ceb5b269399fda4b2a1a5bb3bbe04e1290af3.scope: Consumed 1.059s CPU time.
Nov 24 09:42:50 compute-0 python3.9[217155]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6371f62059a625dbe2762b42d02a99658a47c42fd54e385f91e7522df80ed7ca-merged.mount: Deactivated successfully.
Nov 24 09:42:50 compute-0 sudo[217152]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:50 compute-0 podman[216871]: 2025-11-24 09:42:50.511066562 +0000 UTC m=+0.926510840 container remove 2739d6c68d07596f8a71ca67703ceb5b269399fda4b2a1a5bb3bbe04e1290af3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_montalcini, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:42:50 compute-0 systemd[1]: libpod-conmon-2739d6c68d07596f8a71ca67703ceb5b269399fda4b2a1a5bb3bbe04e1290af3.scope: Deactivated successfully.
Nov 24 09:42:50 compute-0 sudo[216605]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:42:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:50 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:42:50 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:50 compute-0 sudo[217225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:42:50 compute-0 sudo[217225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:42:50 compute-0 sudo[217225]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:50 compute-0 ceph-mon[74331]: pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:42:50 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:50 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:42:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:50.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:50 compute-0 sudo[217352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkcsorpdzklblhrxiawcjtkacwixbeou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977370.642552-2927-43208022589701/AnsiballZ_copy.py'
Nov 24 09:42:50 compute-0 sudo[217352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:50 : epoch 69242888 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
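The server enters a 90-second grace window on startup: clients may reclaim pre-restart state, while brand-new opens can be refused with NFS4ERR_GRACE until the reaper lifts grace (which happens at 09:42:56 below, once "reclaim complete(0) clid count(0)" shows nothing left to reclaim). A client-side smoke test, with the VIP and export path as hypothetical values:

    # hypothetical VIP/export; mount via the haproxy frontend for this ganesha
    mount -t nfs -o vers=4.1 192.168.122.200:/volumes/share1 /mnt/test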
Nov 24 09:42:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:42:50] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:42:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:42:50] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
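The mgr's prometheus module is serving /metrics to the Prometheus scraper. A manual scrape sketch; port 9283 is the module's documented default and an assumption here, since the log line omits it:

    curl -s http://192.168.122.100:9283/metrics | grep -m3 '^ceph_'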
Nov 24 09:42:51 compute-0 python3.9[217354]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:51 compute-0 sudo[217352]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:51 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:51 compute-0 sudo[217520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brtomfwfyqseqzqyimdanlwwvlutsjjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977371.2222874-2927-138181511290030/AnsiballZ_copy.py'
Nov 24 09:42:51 compute-0 sudo[217520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:42:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094251 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
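These "Layer4 check passed" lines pair with ganesha's recurring svc_vc_recv "proxy header rest len failed ... (will set dead)" events: a bare TCP connect-and-close satisfies haproxy's L4 check, but ganesha, apparently expecting a PROXY-protocol header on that listener, discards each probe connection, hence the steady EVENT noise. The probe can be reproduced by hand, assuming the usual NFS port:

    # plain TCP connect like haproxy's layer-4 check (port 2049 assumed)
    nc -zv 127.0.0.1 2049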
Nov 24 09:42:51 compute-0 python3.9[217522]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:51 compute-0 sudo[217520]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:51 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:51.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:52 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf18000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:52 compute-0 ceph-mon[74331]: pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:42:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:52.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:52 compute-0 podman[217589]: 2025-11-24 09:42:52.799890562 +0000 UTC m=+0.063750963 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
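This is podman's periodic healthcheck event for ovn_metadata_agent, driven by the mounted /openstack/healthcheck script named in config_data. Manual equivalents with the standard podman CLI:

    # run the configured healthcheck once
    podman healthcheck run ovn_metadata_agent && echo healthy
    # or read the last recorded status
    podman inspect --format '{{.State.Health.Status}}' ovn_metadata_agent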
Nov 24 09:42:52 compute-0 sudo[217689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjooikxwawhuexfxeyvmbnlianviygzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977372.6675208-3035-213402226041056/AnsiballZ_copy.py'
Nov 24 09:42:52 compute-0 sudo[217689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:53 compute-0 python3.9[217691]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:53 compute-0 sudo[217689]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:53 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf38001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:53 compute-0 sudo[217842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxnupzzmcjqifxsqjyygtvksnxuuysbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977373.3374434-3035-227269396365386/AnsiballZ_copy.py'
Nov 24 09:42:53 compute-0 sudo[217842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:42:53 compute-0 python3.9[217844]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:53 compute-0 sudo[217842]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:53 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf20000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:53 : epoch 69242888 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:42:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:53 : epoch 69242888 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:42:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:53.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:54 compute-0 sudo[217994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsqieoagakpcoodumjcbkxizibwljqhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977373.878181-3035-139975052023472/AnsiballZ_copy.py'
Nov 24 09:42:54 compute-0 sudo[217994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:54 compute-0 python3.9[217996]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:54 compute-0 sudo[217994]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094254 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:42:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:54 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:54 compute-0 ceph-mon[74331]: pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:42:54 compute-0 sudo[218147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuisikzcksqkuluwwulzslsgwqmlsbbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977374.4786007-3035-269544452929562/AnsiballZ_copy.py'
Nov 24 09:42:54 compute-0 sudo[218147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:54.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:54 compute-0 python3.9[218149]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:54 compute-0 sudo[218147]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:55 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:55 compute-0 sudo[218300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdebixqjukzkyctygjwmbcufqcnxiqjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977375.1105075-3035-182321929122542/AnsiballZ_copy.py'
Nov 24 09:42:55 compute-0 sudo[218300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:55 compute-0 python3.9[218302]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:42:55 compute-0 sudo[218300]: pam_unix(sudo:session): session closed for user root
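This completes the TLS layout for libvirt and QEMU: /etc/pki/libvirt/clientcert.pem with private/clientkey.pem, the CA at /etc/pki/CA/cacert.pem, and /etc/pki/qemu/{server,client}-{cert,key}.pem plus ca-cert.pem, all staged from /var/lib/openstack/certs/libvirt/default/. A quick consistency check with stock openssl (paths from the log; the modulus comparison assumes an RSA key):

    # client cert must chain to the deployed CA
    openssl verify -CAfile /etc/pki/CA/cacert.pem /etc/pki/libvirt/clientcert.pem
    # cert and key must match
    openssl x509 -noout -modulus -in /etc/pki/libvirt/clientcert.pem | sha256sum
    openssl rsa  -noout -modulus -in /etc/pki/libvirt/private/clientkey.pem | sha256sum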
Nov 24 09:42:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:42:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:55 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf380025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:55.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:56 compute-0 sudo[218453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyxtvemevnsjgbhtexyrvgjxsrobtyqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977376.2496455-3143-131411623199491/AnsiballZ_systemd.py'
Nov 24 09:42:56 compute-0 sudo[218453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:56 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf20001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:42:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:56.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:42:56 compute-0 ceph-mon[74331]: pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:42:56 compute-0 python3.9[218455]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:42:56 compute-0 systemd[1]: Reloading.
Nov 24 09:42:56 compute-0 systemd-sysv-generator[218486]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:42:56 compute-0 systemd-rc-local-generator[218483]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:42:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:56 : epoch 69242888 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:42:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:42:57.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
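Alertmanager cannot deliver dashboard webhooks to the peer nodes before its timeout. The same endpoint can be probed by hand (URL copied from the log; -m bounds the wait client-side). Note the receivers are addressed as http:// on port 8443; if the dashboard actually serves TLS there, that mismatch alone would produce these failures:

    curl -m 5 -s -o /dev/null -w '%{http_code}\n' \
        -X POST http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver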
Nov 24 09:42:57 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 24 09:42:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:57 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:57 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 24 09:42:57 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 24 09:42:57 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 24 09:42:57 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 24 09:42:57 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 24 09:42:57 compute-0 sudo[218453]: pam_unix(sudo:session): session closed for user root
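The systemd module call above (daemon_reload=True, state=restarted) is equivalent to:

    systemctl daemon-reload
    systemctl restart virtlogd.service
    systemctl is-active virtlogd.service   # confirm the restart took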
Nov 24 09:42:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Nov 24 09:42:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:57 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:57 compute-0 sudo[218647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjrxazjmnoqmdvuucsjczepnwmyzvtpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977377.5405285-3143-272962516021016/AnsiballZ_systemd.py'
Nov 24 09:42:57 compute-0 sudo[218647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:57.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:42:58 compute-0 python3.9[218649]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:42:58 compute-0 systemd[1]: Reloading.
Nov 24 09:42:58 compute-0 systemd-rc-local-generator[218675]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:42:58 compute-0 systemd-sysv-generator[218678]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:42:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:42:58 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 24 09:42:58 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 24 09:42:58 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 24 09:42:58 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 24 09:42:58 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 24 09:42:58 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 24 09:42:58 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 09:42:58 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 24 09:42:58 compute-0 sudo[218647]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:58 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf380025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:42:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:42:58.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:42:58 compute-0 ceph-mon[74331]: pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Nov 24 09:42:58 compute-0 sudo[218864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jywkumnccfxuuhwqippubzxofcmpogpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977378.7321327-3143-47751882041451/AnsiballZ_systemd.py'
Nov 24 09:42:59 compute-0 sudo[218864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:42:59 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 24 09:42:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:59 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf20001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:59 compute-0 python3.9[218866]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:42:59 compute-0 systemd[1]: Reloading.
Nov 24 09:42:59 compute-0 systemd-rc-local-generator[218895]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:42:59 compute-0 systemd-sysv-generator[218898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:42:59 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 24 09:42:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Nov 24 09:42:59 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 24 09:42:59 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 24 09:42:59 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 24 09:42:59 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 24 09:42:59 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 24 09:42:59 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 24 09:42:59 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 24 09:42:59 compute-0 sudo[218864]: pam_unix(sudo:session): session closed for user root
Nov 24 09:42:59 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 24 09:42:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:42:59 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c002070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:42:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:42:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:42:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:42:59.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:00 compute-0 sudo[219087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eogizpvywsuisutigdlzpntggmuxyjkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977379.8740907-3143-245358063667854/AnsiballZ_systemd.py'
Nov 24 09:43:00 compute-0 sudo[219087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094300 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:43:00 compute-0 python3.9[219089]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:43:00 compute-0 systemd[1]: Reloading.
Nov 24 09:43:00 compute-0 setroubleshoot[218867]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 0ea43800-6c5b-406b-9d8c-e0cb652f03e0
Nov 24 09:43:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:00 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:00 compute-0 setroubleshoot[218867]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
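The plugin suggestions above are setroubleshoot's verbatim output. When chasing capability AVCs like this dac_read_search denial, it can also help to temporarily reveal dontaudit-suppressed records (standard policycoreutils; restore with -B when done):

    semodule -DB                           # rebuild policy with dontaudit disabled
    systemctl restart virtlogd.service     # retrigger the denial
    ausearch -m avc -ts recent -c virtlogd
    semodule -B                            # re-enable dontaudit rules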
Nov 24 09:43:00 compute-0 systemd-sysv-generator[219116]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:43:00 compute-0 systemd-rc-local-generator[219113]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:43:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:00.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:00 compute-0 ceph-mon[74331]: pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Nov 24 09:43:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:43:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:43:00] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:43:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:43:00] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:43:01 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 24 09:43:01 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 24 09:43:01 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 24 09:43:01 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 24 09:43:01 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 24 09:43:01 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 24 09:43:01 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 24 09:43:01 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 24 09:43:01 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 24 09:43:01 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 24 09:43:01 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 09:43:01 compute-0 systemd[1]: Started libvirt QEMU daemon.
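The modular libvirt daemons restarted here are socket-activated; the resulting listeners can be enumerated with:

    systemctl list-sockets 'virt*'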
Nov 24 09:43:01 compute-0 sudo[219087]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:01 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf380032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:01 compute-0 sudo[219304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enkgvvvfceossgwgnkjeyfjijwjqbtht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977381.2367043-3143-266024414904167/AnsiballZ_systemd.py'
Nov 24 09:43:01 compute-0 sudo[219304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 853 B/s wr, 3 op/s
Nov 24 09:43:01 compute-0 python3.9[219306]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:43:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:01 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:01 compute-0 systemd[1]: Reloading.
Nov 24 09:43:01 compute-0 systemd-rc-local-generator[219332]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:43:01 compute-0 systemd-sysv-generator[219335]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:43:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:01.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:02 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 24 09:43:02 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 24 09:43:02 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 24 09:43:02 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 24 09:43:02 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 24 09:43:02 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 24 09:43:02 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 24 09:43:02 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 24 09:43:02 compute-0 sudo[219304]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:02 compute-0 sudo[219392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:43:02 compute-0 sudo[219392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:02 compute-0 sudo[219392]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:02 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:02.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:02 compute-0 ceph-mon[74331]: pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 853 B/s wr, 3 op/s
Nov 24 09:43:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:03 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c0095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:03 compute-0 sudo[219543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emsucfwozlurtqsbpvrzclzgekmwkbnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977383.0825503-3254-37580216849811/AnsiballZ_file.py'
Nov 24 09:43:03 compute-0 sudo[219543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:03 compute-0 python3.9[219545]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:03 compute-0 sudo[219543]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Nov 24 09:43:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:03 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf380032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:03.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:04 compute-0 sudo[219695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbxpsoukopafnenbyjhusooegzdggzuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977383.8964467-3278-115494152932204/AnsiballZ_find.py'
Nov 24 09:43:04 compute-0 sudo[219695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:04 compute-0 python3.9[219697]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 09:43:04 compute-0 sudo[219695]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:04 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:04.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:04 compute-0 ceph-mon[74331]: pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Nov 24 09:43:04 compute-0 sudo[219848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rscovianpqajbkaodftinlnpcismfgjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977384.75211-3302-251836160319657/AnsiballZ_command.py'
Nov 24 09:43:05 compute-0 sudo[219848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:05 compute-0 python3.9[219850]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
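The fact-gathering command echoes the cluster name, then extracts fsid from the cephadm-written ini file. Against a stanza like "fsid = 84a084c3-61a7-5de7-8207-1f88efa59a64" (the same fsid embedded in this host's ceph unit names), the pipeline yields:

    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
    # -> 84a084c3-61a7-5de7-8207-1f88efa59a64   (xargs trims the surrounding whitespace)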
Nov 24 09:43:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:05 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf200028c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:05 compute-0 sudo[219848]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Nov 24 09:43:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:05 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c0095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:05.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:06 compute-0 python3.9[220005]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 09:43:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:06 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf38003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:06.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:06 compute-0 ceph-mon[74331]: pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Nov 24 09:43:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:43:07.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:43:07 compute-0 python3.9[220156]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:07 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf180036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:07 compute-0 python3.9[220278]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977386.6926293-3359-84334523116122/.source.xml follow=False _original_basename=secret.xml.j2 checksum=50e2d7af60e90224d932c14cb656694b42455a32 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s
Nov 24 09:43:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:07 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf180036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:07.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:08 compute-0 sudo[220429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yplrvwwbwpdnxrycuvcpvxyeucyvwfcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977388.0085373-3404-200394884936275/AnsiballZ_command.py'
Nov 24 09:43:08 compute-0 sudo[220429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:08 compute-0 python3.9[220431]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 84a084c3-61a7-5de7-8207-1f88efa59a64
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:43:08 compute-0 polkitd[43367]: Registered Authentication Agent for unix-process:220433:333609 (system bus name :1.2882 [pkttyagent --process 220433 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 24 09:43:08 compute-0 polkitd[43367]: Unregistered Authentication Agent for unix-process:220433:333609 (system bus name :1.2882, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 24 09:43:08 compute-0 polkitd[43367]: Registered Authentication Agent for unix-process:220432:333608 (system bus name :1.2883 [pkttyagent --process 220432 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 24 09:43:08 compute-0 polkitd[43367]: Unregistered Authentication Agent for unix-process:220432:333608 (system bus name :1.2883, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 24 09:43:08 compute-0 sudo[220429]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:08 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf200028c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:08.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:08 compute-0 ceph-mon[74331]: pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s
Nov 24 09:43:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:09 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf200028c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:09 compute-0 python3.9[220593]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:43:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:09 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf38003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:09 compute-0 sudo[220744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzcupawulugnpanbllcuujgpzolcsyle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977389.697735-3452-19648211300987/AnsiballZ_command.py'
Nov 24 09:43:09 compute-0 sudo[220744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:09.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:10 compute-0 sudo[220744]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:10 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:10 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 24 09:43:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:10.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:10 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 24 09:43:10 compute-0 ceph-mon[74331]: pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:43:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:43:10] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:43:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:43:10] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:43:11 compute-0 sudo[220898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azuucysudrkjwyhhwhfyjycjuddcxsvi ; FSID=84a084c3-61a7-5de7-8207-1f88efa59a64 KEY=AQBLJCRpAAAAABAAXAzKB5itq82KD4bRedT2Ig== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977390.8027315-3476-269601916774783/AnsiballZ_command.py'
Nov 24 09:43:11 compute-0 sudo[220898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:11 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf200028c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:11 compute-0 polkitd[43367]: Registered Authentication Agent for unix-process:220902:333887 (system bus name :1.2886 [pkttyagent --process 220902 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 24 09:43:11 compute-0 polkitd[43367]: Unregistered Authentication Agent for unix-process:220902:333887 (system bus name :1.2886, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 24 09:43:11 compute-0 sudo[220898]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:43:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:11 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf200028c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:11.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:12 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf38003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:12 compute-0 sudo[221058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgujrspewpazlxxexgunutuiltsfoqvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977392.5146546-3500-56288416411040/AnsiballZ_copy.py'
Nov 24 09:43:12 compute-0 sudo[221058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:12.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:12 compute-0 python3.9[221060]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:12 compute-0 sudo[221058]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:12 compute-0 ceph-mon[74331]: pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:43:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:13 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf38003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:13 compute-0 sudo[221211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlcyoehbdknhfeodebxrjpeucpuhlehy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977393.304611-3524-54743382051137/AnsiballZ_stat.py'
Nov 24 09:43:13 compute-0 sudo[221211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:13 compute-0 python3.9[221213]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:13 compute-0 sudo[221211]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:13 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf38003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:13.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:14 compute-0 sudo[221334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jharbunjoiqkmerxjgrzmisouvnmrrcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977393.304611-3524-54743382051137/AnsiballZ_copy.py'
Nov 24 09:43:14 compute-0 sudo[221334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:14 compute-0 python3.9[221336]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977393.304611-3524-54743382051137/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:14 compute-0 sudo[221334]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:14 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf14000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:14.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:14 compute-0 ceph-mon[74331]: pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:15 compute-0 sudo[221487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgjolaykyteibudorobjasbkggpeirrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977394.7767117-3572-53450444864975/AnsiballZ_file.py'
Nov 24 09:43:15 compute-0 sudo[221487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:15 compute-0 python3.9[221489]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:15 compute-0 sudo[221487]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:15 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf180036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:43:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:43:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:43:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:43:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:43:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:43:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:15 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf38003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:43:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:15.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:16 compute-0 sudo[221653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmrzbgkntfqosjxumnsnajevqepouzoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977395.5720904-3596-151928980094445/AnsiballZ_stat.py'
Nov 24 09:43:16 compute-0 sudo[221653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:16 compute-0 podman[221614]: 2025-11-24 09:43:16.139814025 +0000 UTC m=+0.091902920 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 09:43:16 compute-0 python3.9[221662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:16 compute-0 sudo[221653]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:16 compute-0 sudo[221746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzccphynihhzevslfwyejiehedxnoypy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977395.5720904-3596-151928980094445/AnsiballZ_file.py'
Nov 24 09:43:16 compute-0 sudo[221746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:16 compute-0 python3.9[221748]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:16 compute-0 sudo[221746]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:16 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:16.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:17 compute-0 ceph-mon[74331]: pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:43:17.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:43:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:17 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:17 compute-0 sudo[221899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvyrmmjsybnkbrpxzqtwpnbjohtoxruc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977397.1341443-3632-34171814502976/AnsiballZ_stat.py'
Nov 24 09:43:17 compute-0 sudo[221899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:17 compute-0 python3.9[221901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:17 compute-0 sudo[221899]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:43:17 compute-0 sudo[221977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhvoslvdzszqzsdwobxkhifewjsvbbwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977397.1341443-3632-34171814502976/AnsiballZ_file.py'
Nov 24 09:43:17 compute-0 sudo[221977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:17 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf18003880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:17 compute-0 python3.9[221979]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.nm0kshym recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:17 compute-0 sudo[221977]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:17.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.367762) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398367800, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 635, "num_deletes": 252, "total_data_size": 934265, "memory_usage": 946960, "flush_reason": "Manual Compaction"}
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398372759, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 646954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17416, "largest_seqno": 18050, "table_properties": {"data_size": 643929, "index_size": 933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7731, "raw_average_key_size": 19, "raw_value_size": 637731, "raw_average_value_size": 1647, "num_data_blocks": 41, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977356, "oldest_key_time": 1763977356, "file_creation_time": 1763977398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 5039 microseconds, and 2239 cpu microseconds.
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.372799) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 646954 bytes OK
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.372822) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.374300) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.374314) EVENT_LOG_v1 {"time_micros": 1763977398374310, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.374328) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 930924, prev total WAL file size 930924, number of live WAL files 2.
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.374833) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353035' seq:0, type:0; will stop at (end)
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(631KB)], [35(14MB)]
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398374866, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 16116732, "oldest_snapshot_seqno": -1}
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4981 keys, 12264077 bytes, temperature: kUnknown
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398445773, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12264077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12230061, "index_size": 20466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125487, "raw_average_key_size": 25, "raw_value_size": 12139170, "raw_average_value_size": 2437, "num_data_blocks": 850, "num_entries": 4981, "num_filter_entries": 4981, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763977398, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.446024) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12264077 bytes
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.447489) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.1 rd, 172.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.8 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(43.9) write-amplify(19.0) OK, records in: 5482, records dropped: 501 output_compression: NoCompression
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.447516) EVENT_LOG_v1 {"time_micros": 1763977398447504, "job": 16, "event": "compaction_finished", "compaction_time_micros": 70979, "compaction_time_cpu_micros": 23819, "output_level": 6, "num_output_files": 1, "total_output_size": 12264077, "num_input_records": 5482, "num_output_records": 4981, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398447796, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977398451821, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.374774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.451931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.451939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.451941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.451944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:43:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:43:18.451946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:43:18 compute-0 sudo[222130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-takfqngthxcqzuzbqdsthxohpakieojq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977398.4864788-3668-69279469655946/AnsiballZ_stat.py'
Nov 24 09:43:18 compute-0 sudo[222130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:18 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf38003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:18.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:18 compute-0 python3.9[222132]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:18 compute-0 sudo[222130]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:19 compute-0 ceph-mon[74331]: pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:43:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:19 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:19 compute-0 sudo[222209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtnvzezcfamdsouzfhcrefitvcgmffic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977398.4864788-3668-69279469655946/AnsiballZ_file.py'
Nov 24 09:43:19 compute-0 sudo[222209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:19 compute-0 python3.9[222211]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:19 compute-0 sudo[222209]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:19 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:19.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:20 compute-0 sudo[222361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iocysndkapbllmtrlivqvrilvdjiltyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977399.7642038-3707-16486835183913/AnsiballZ_command.py'
Nov 24 09:43:20 compute-0 sudo[222361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:20 compute-0 python3.9[222363]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:43:20 compute-0 sudo[222361]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:43:20.553 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:43:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:43:20.554 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:43:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:43:20.554 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:43:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:20 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:20.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:43:20] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:43:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:43:20] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:43:21 compute-0 ceph-mon[74331]: pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:21 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf3c00a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:21 compute-0 sudo[222516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyybombmhlvyaxjtmyhdxwmjwrutisqa ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763977400.6608198-3731-251919448530566/AnsiballZ_edpm_nftables_from_files.py'
Nov 24 09:43:21 compute-0 sudo[222516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:21 compute-0 python3[222518]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 09:43:21 compute-0 sudo[222516]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:43:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:21 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf14001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:22.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:22 compute-0 sudo[222668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moxjsrcmootuatqvupxcfnjukiqvwurk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977401.89036-3755-166035772201867/AnsiballZ_stat.py'
Nov 24 09:43:22 compute-0 sudo[222668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:22 compute-0 python3.9[222670]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:22 compute-0 sudo[222668]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:22 compute-0 sudo[222747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijryhfoabheiqpjutqjnloucigaiqmrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977401.89036-3755-166035772201867/AnsiballZ_file.py'
Nov 24 09:43:22 compute-0 sudo[222747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:22 compute-0 sudo[222750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:43:22 compute-0 sudo[222750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:22 compute-0 sudo[222750]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:22 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf180038c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:22 compute-0 python3.9[222749]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:22 compute-0 sudo[222747]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:22.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:23 compute-0 ceph-mon[74331]: pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:43:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:23 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf38003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:23 compute-0 podman[222900]: 2025-11-24 09:43:23.558776548 +0000 UTC m=+0.047326641 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:43:23 compute-0 sudo[222943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhjcinguacctwrkceljvqfxzkijrrljg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977403.2453396-3791-43992312416852/AnsiballZ_stat.py'
Nov 24 09:43:23 compute-0 sudo[222943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:23 compute-0 python3.9[222947]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:23 compute-0 sudo[222943]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:23 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf0c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:23 compute-0 sudo[223023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isjcpwwjblugvkimqouloedwqunbzbhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977403.2453396-3791-43992312416852/AnsiballZ_file.py'
Nov 24 09:43:23 compute-0 sudo[223023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:24.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:24 compute-0 python3.9[223025]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:24 compute-0 sudo[223023]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:24 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf14001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:24.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:24 compute-0 sudo[223176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kocwlsrthulygpwipfmvzzabkssmmsgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977404.602759-3827-66694592355880/AnsiballZ_stat.py'
Nov 24 09:43:24 compute-0 sudo[223176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:25 compute-0 python3.9[223178]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:25 compute-0 sudo[223176]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:25 compute-0 ceph-mon[74331]: pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:25 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf14001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:25 compute-0 sudo[223255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaoptlyehchizzapwtsxifmqwikzdryl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977404.602759-3827-66694592355880/AnsiballZ_file.py'
Nov 24 09:43:25 compute-0 sudo[223255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:25 compute-0 python3.9[223257]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:25 compute-0 sudo[223255]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:25 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf38003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:26.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:26 compute-0 sudo[223408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuhhqnpizjmathkkvbzlnbogfdcxpdtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977405.992594-3863-154551802704350/AnsiballZ_stat.py'
Nov 24 09:43:26 compute-0 sudo[223408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:26 compute-0 python3.9[223410]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:26 compute-0 sudo[223408]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:26 compute-0 sudo[223486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpoinepqkwulkafrcswudetljrlkjfdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977405.992594-3863-154551802704350/AnsiballZ_file.py'
Nov 24 09:43:26 compute-0 sudo[223486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:26 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf0c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:26.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:26 compute-0 python3.9[223488]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:26 compute-0 sudo[223486]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:43:27.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:43:27 compute-0 ceph-mon[74331]: pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:27 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf14001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:43:27 compute-0 sudo[223639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-valujeiffvphasxbkyaogobgnqrvvdhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977407.348688-3899-204297028592/AnsiballZ_stat.py'
Nov 24 09:43:27 compute-0 sudo[223639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:27 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf14001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:27 compute-0 python3.9[223641]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:28.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:28 compute-0 sudo[223639]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:28 compute-0 sudo[223765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqqrjyzcnvgqvpxunwfobdnrcdwmnbqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977407.348688-3899-204297028592/AnsiballZ_copy.py'
Nov 24 09:43:28 compute-0 sudo[223765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:28 compute-0 python3.9[223767]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763977407.348688-3899-204297028592/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:28 compute-0 sudo[223765]: pam_unix(sudo:session): session closed for user root
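[The sequence above shows the two-step deploy pattern this Ansible run uses for every nftables fragment: a stat task first fetches the sha1 of the target, then a file whose content is unchanged gets ansible.legacy.file (ownership/mode enforcement only), while a changed one — here /etc/nftables/edpm-rules.nft — gets ansible.legacy.copy with the new checksum. The same comparison can be reproduced by hand; a minimal sketch, with the checksum value copied from the copy task logged above:

    # compare the deployed rules file against the checksum Ansible shipped
    expected=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5
    actual=$(sha1sum /etc/nftables/edpm-rules.nft | awk '{print $1}')
    [ "$actual" = "$expected" ] && echo unchanged || echo changed
]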
Nov 24 09:43:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:28 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf38003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:28.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:29 compute-0 ceph-mon[74331]: pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:43:29 compute-0 sudo[223917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gibfvsvuuekjmodzygylecnlmmkkccyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977408.9137673-3944-80130454472481/AnsiballZ_file.py'
Nov 24 09:43:29 compute-0 sudo[223917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:29 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf0c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:29 compute-0 python3.9[223919]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:29 compute-0 sudo[223917]: pam_unix(sudo:session): session closed for user root
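[The touch of /etc/nftables/edpm-rules.nft.changed reads as a marker-file handshake: the write task drops the marker, and a later task (09:43:34 below) removes it once the new rules have been loaded, so an interrupted run would leave the marker behind as a "rules changed but not applied" flag. That is a reading of this log's sequence rather than documented role behaviour; in shell terms the pattern is simply:

    touch /etc/nftables/edpm-rules.nft.changed   # set right after writing new rules
    # ... validate and load the assembled ruleset ...
    rm -f /etc/nftables/edpm-rules.nft.changed   # clear only once the load succeeded
]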
Nov 24 09:43:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:29 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf0c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:29 compute-0 sudo[224070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfwoqyywvdhgnzjwfwiqmtxrrtpdqjld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977409.6809487-3968-229998436311417/AnsiballZ_command.py'
Nov 24 09:43:29 compute-0 sudo[224070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:30.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:30 compute-0 python3.9[224072]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:43:30 compute-0 sudo[224070]: pam_unix(sudo:session): session closed for user root
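[The command task above is the syntax gate for the assembled ruleset: all five fragment files are concatenated in load order and piped to nft in check mode, which parses and validates the input without committing anything to the kernel. Reproduced from the logged _raw_params:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: check only; -f -: read ruleset from stdin
]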
Nov 24 09:43:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:30 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf0c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:30.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:43:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Nov 24 09:43:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:43:30] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Nov 24 09:43:31 compute-0 sudo[224226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrzdzpaxquycjckqmjdzboopyriotucf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977410.6260176-3992-23085472883320/AnsiballZ_blockinfile.py'
Nov 24 09:43:31 compute-0 sudo[224226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:31 compute-0 ceph-mon[74331]: pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:43:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:31 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf38003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:31 compute-0 python3.9[224228]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:31 compute-0 sudo[224226]: pam_unix(sudo:session): session closed for user root
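[The blockinfile task wires the fragments into the system-wide nftables config. Given the logged block content, marker format (# {mark} ANSIBLE MANAGED BLOCK) and marker_begin/marker_end values, the managed section of /etc/sysconfig/nftables.conf should come out as below — and validate=nft -c -f %s makes Ansible re-check the whole resulting file before moving it into place:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
]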
Nov 24 09:43:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:43:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:31 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf0c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:31 compute-0 sudo[224379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtygbglslogeobgsdaggqozdpzdopgql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977411.6805766-4019-75651202310679/AnsiballZ_command.py'
Nov 24 09:43:31 compute-0 sudo[224379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:32.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:32 compute-0 python3.9[224381]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:43:32 compute-0 sudo[224379]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:32 compute-0 sudo[224533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgfmczowxqzjwpjpfqmkcxwryacyqymp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977412.4614313-4043-8665739287563/AnsiballZ_stat.py'
Nov 24 09:43:32 compute-0 sudo[224533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:32 compute-0 kernel: ganesha.nfsd[222775]: segfault at 50 ip 00007fdfe9cd132e sp 00007fdfaf7fd210 error 4 in libntirpc.so.5.8[7fdfe9cb6000+2c000] likely on CPU 5 (core 0, socket 5)
Nov 24 09:43:32 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:43:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[212135]: 24/11/2025 09:43:32 : epoch 69242888 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf0c0016a0 fd 38 proxy ignored for local
Nov 24 09:43:32 compute-0 systemd[1]: Started Process Core Dump (PID 224536/UID 0).
Nov 24 09:43:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:32.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:32 compute-0 python3.9[224535]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:43:32 compute-0 sudo[224533]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:33 compute-0 ceph-mon[74331]: pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:43:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:33 compute-0 sudo[224690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycmjrchhvwybffpixgotmoydhxrgqygn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977413.2563272-4067-27709575278718/AnsiballZ_command.py'
Nov 24 09:43:33 compute-0 sudo[224690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:33 compute-0 python3.9[224692]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:43:33 compute-0 sudo[224690]: pam_unix(sudo:session): session closed for user root
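[Note the two-phase load: edpm-chains.nft was applied on its own first (09:43:32 above), and only then are the flushes, rules and jump updates concatenated into a single nft -f - run here, so the flush and the re-population land in one nft transaction rather than leaving a window with empty chains. By hand, the equivalent of the two logged commands:

    nft -f /etc/nftables/edpm-chains.nft                  # ensure chains exist (idempotent)
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -    # flush + reload in one transaction
]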
Nov 24 09:43:33 compute-0 systemd-coredump[224537]: Process 212157 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 56:
                                                    #0  0x00007fdfe9cd132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:43:33 compute-0 systemd[1]: systemd-coredump@5-224536-0.service: Deactivated successfully.
Nov 24 09:43:33 compute-0 systemd[1]: systemd-coredump@5-224536-0.service: Consumed 1.042s CPU time.
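[The segfault at 09:43:32 — instruction pointer inside libntirpc.so.5.8, the same library emitting the svc_vc_recv events throughout this section — is what kills ganesha.nfsd and produces the core dump recorded above. Since systemd-coredump captured it, the dump can be pulled up after the fact; an illustrative follow-up, using the dumped PID from the record above:

    coredumpctl list ganesha.nfsd    # recent dumps for this binary
    coredumpctl info 212157          # metadata and stack trace for the dumped process
    coredumpctl debug 212157         # open the core in gdb (needs debuginfo installed)
]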
Nov 24 09:43:33 compute-0 podman[224724]: 2025-11-24 09:43:33.933429357 +0000 UTC m=+0.028750011 container died a56abb98d214d14a158e60f98fb9f1de4024c78aaef5c622e277eb3dc23f922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 24 09:43:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ba2e271b16de74ce7faba6b706ff86468e237ba34eb6f6cc9c79cce17332a44-merged.mount: Deactivated successfully.
Nov 24 09:43:33 compute-0 podman[224724]: 2025-11-24 09:43:33.97307352 +0000 UTC m=+0.068394164 container remove a56abb98d214d14a158e60f98fb9f1de4024c78aaef5c622e277eb3dc23f922f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:43:33 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:43:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:34.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:34 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Failed with result 'exit-code'.
Nov 24 09:43:34 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.355s CPU time.
Nov 24 09:43:34 compute-0 sudo[224894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhbcxmeuaagcmkkrbxxbndipvirmtcoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977414.1005225-4091-269581421929262/AnsiballZ_file.py'
Nov 24 09:43:34 compute-0 sudo[224894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:34 compute-0 python3.9[224896]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:34 compute-0 sudo[224894]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:34.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:35 compute-0 ceph-mon[74331]: pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:35 compute-0 sudo[225047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxvrcbqarehxhbanpumhmbuuorhpsnih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977414.956771-4115-11791485016114/AnsiballZ_stat.py'
Nov 24 09:43:35 compute-0 sudo[225047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:35 compute-0 python3.9[225049]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:35 compute-0 sudo[225047]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:35 compute-0 sudo[225170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzbfhvpocxvpphkrzzxfecqcfivybove ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977414.956771-4115-11791485016114/AnsiballZ_copy.py'
Nov 24 09:43:35 compute-0 sudo[225170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:35 compute-0 python3.9[225172]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977414.956771-4115-11791485016114/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:35 compute-0 sudo[225170]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:36 compute-0 sudo[225323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjroshanihwebfrsurdwqbjcxgstkdhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977416.4363093-4160-48365298907996/AnsiballZ_stat.py'
Nov 24 09:43:36 compute-0 sudo[225323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:36.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:36 compute-0 python3.9[225325]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:36 compute-0 sudo[225323]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:43:37.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:43:37 compute-0 ceph-mon[74331]: pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:37 compute-0 sudo[225447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzgdgbulprumohagipsimcsvmjsqxbiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977416.4363093-4160-48365298907996/AnsiballZ_copy.py'
Nov 24 09:43:37 compute-0 sudo[225447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:37 compute-0 python3.9[225449]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977416.4363093-4160-48365298907996/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:37 compute-0 sudo[225447]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:43:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:43:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:38.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:43:38 compute-0 sudo[225600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocbldsngukwmocymalrlwmqozifttvya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977418.0063176-4205-67315227201151/AnsiballZ_stat.py'
Nov 24 09:43:38 compute-0 sudo[225600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:38 compute-0 python3.9[225602]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:43:38 compute-0 sudo[225600]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094338 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:43:38 compute-0 sudo[225723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtuxffityghxwleqtmacfmpduxwymrdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977418.0063176-4205-67315227201151/AnsiballZ_copy.py'
Nov 24 09:43:38 compute-0 sudo[225723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:38.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:38 compute-0 python3.9[225725]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977418.0063176-4205-67315227201151/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:43:38 compute-0 sudo[225723]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:39 compute-0 ceph-mon[74331]: pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:43:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:39 compute-0 sudo[225876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzqfrcaeasshxemokzkcpzkfvbfxququ ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977419.498224-4250-164656158853368/AnsiballZ_systemd.py'
Nov 24 09:43:39 compute-0 sudo[225876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:40.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:40 compute-0 python3.9[225878]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:43:40 compute-0 systemd[1]: Reloading.
Nov 24 09:43:40 compute-0 systemd-rc-local-generator[225906]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:43:40 compute-0 systemd-sysv-generator[225910]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:43:40 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 24 09:43:40 compute-0 sudo[225876]: pam_unix(sudo:session): session closed for user root
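[The systemd task above combines daemon_reload=True, enabled=True and state=restarted, which is why the journal shows a full "Reloading." pass (picking up the unit files copied in just before) followed by the target being reached. The rough shell equivalent of that single module call:

    systemctl daemon-reload                  # re-read unit files after the copies above
    systemctl enable edpm_libvirt.target     # enabled=True
    systemctl restart edpm_libvirt.target    # state=restarted
]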
Nov 24 09:43:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:40.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:43:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Nov 24 09:43:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:43:40] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Nov 24 09:43:41 compute-0 sudo[226068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scemuvlztqfzroenpmdbgcflalkwdfdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977420.8633473-4274-37131731701154/AnsiballZ_systemd.py'
Nov 24 09:43:41 compute-0 sudo[226068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:41 compute-0 ceph-mon[74331]: pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:43:41 compute-0 python3.9[226070]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 09:43:41 compute-0 systemd[1]: Reloading.
Nov 24 09:43:41 compute-0 systemd-rc-local-generator[226096]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:43:41 compute-0 systemd-sysv-generator[226100]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:43:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:43:41 compute-0 systemd[1]: Reloading.
Nov 24 09:43:41 compute-0 systemd-rc-local-generator[226136]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:43:41 compute-0 systemd-sysv-generator[226140]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:43:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:42.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:42 compute-0 sudo[226068]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:42 compute-0 sshd-session[165251]: Connection closed by 192.168.122.30 port 41930
Nov 24 09:43:42 compute-0 sshd-session[165247]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:43:42 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Nov 24 09:43:42 compute-0 systemd[1]: session-53.scope: Consumed 3min 23.415s CPU time.
Nov 24 09:43:42 compute-0 systemd-logind[822]: Session 53 logged out. Waiting for processes to exit.
Nov 24 09:43:42 compute-0 systemd-logind[822]: Removed session 53.
Nov 24 09:43:42 compute-0 sudo[226169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:43:42 compute-0 sudo[226169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:42 compute-0 sudo[226169]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:42.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:43 compute-0 ceph-mon[74331]: pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:43:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:43:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:44.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:44 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Scheduled restart job, restart counter is at 6.
Nov 24 09:43:44 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:43:44 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.355s CPU time.
Nov 24 09:43:44 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:43:44 compute-0 podman[226248]: 2025-11-24 09:43:44.449588165 +0000 UTC m=+0.044148838 container create df9f8985edb66069b61b85b3d81a0dab55381b41763cd3aad0cf5a40e5727bbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 09:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e80afccf69c092751e552cbb0f9727659470ceedd45d18549d247b34002db41/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e80afccf69c092751e552cbb0f9727659470ceedd45d18549d247b34002db41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e80afccf69c092751e552cbb0f9727659470ceedd45d18549d247b34002db41/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e80afccf69c092751e552cbb0f9727659470ceedd45d18549d247b34002db41/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ssprex-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:44 compute-0 podman[226248]: 2025-11-24 09:43:44.509237623 +0000 UTC m=+0.103798316 container init df9f8985edb66069b61b85b3d81a0dab55381b41763cd3aad0cf5a40e5727bbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:43:44 compute-0 podman[226248]: 2025-11-24 09:43:44.514364695 +0000 UTC m=+0.108925368 container start df9f8985edb66069b61b85b3d81a0dab55381b41763cd3aad0cf5a40e5727bbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:43:44 compute-0 bash[226248]: df9f8985edb66069b61b85b3d81a0dab55381b41763cd3aad0cf5a40e5727bbe
Nov 24 09:43:44 compute-0 podman[226248]: 2025-11-24 09:43:44.432130896 +0000 UTC m=+0.026691589 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:43:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:44 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:43:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:44 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:43:44 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:43:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:44 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:43:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:44 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:43:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:44 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:43:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:44 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:43:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:44 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:43:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:44 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
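[The NFS service that crashed at 09:43:33 comes back here roughly ten seconds after the failure with "restart counter is at 6", i.e. the unit's own Restart= policy is doing the recovery, not the deployment job; the freshly started ganesha then enters a 90-second NFS grace period so reconnecting clients can reclaim state. A hypothetical unit fragment consistent with the observed behaviour — the real unit is generated by cephadm, so these exact values are assumptions:

    [Service]
    Restart=on-failure    # re-run after the status=139 (SIGSEGV) exit seen above
    RestartSec=10s        # matches the ~10 s gap between failure and restart here
]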
Nov 24 09:43:44 compute-0 ceph-mon[74331]: pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:43:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:44.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:43:45
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log', '.nfs', 'cephfs.cephfs.meta', 'volumes', 'backups']
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
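[The pg_autoscaler lines above are self-consistent arithmetic: each raw pg target is the pool's share of raw capacity times its bias times a constant 300 — presumably mon_target_pg_per_osd (default 100) times the three OSDs behind the 60 GiB of capacity. Checking two of the logged lines:

    .mgr:               7.185749983720779e-06 * 1.0 * 300 = 0.0021557249951162337
    cephfs.cephfs.meta: 5.087256625643029e-07 * 4.0 * 300 = 0.0006104707950771635

The raw target is then quantized to a power of two and floored at a per-pool minimum, which is why these near-empty pools still sit at 16 or 32 PGs.]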
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:43:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:43:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:43:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:46.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:46 compute-0 ceph-mon[74331]: pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:43:46 compute-0 podman[226308]: 2025-11-24 09:43:46.804947525 +0000 UTC m=+0.084218306 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 24 09:43:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:46.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:43:47.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:43:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:43:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:48.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:48 compute-0 sshd-session[226335]: Accepted publickey for zuul from 192.168.122.30 port 42592 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:43:48 compute-0 systemd-logind[822]: New session 54 of user zuul.
Nov 24 09:43:48 compute-0 systemd[1]: Started Session 54 of User zuul.
Nov 24 09:43:48 compute-0 sshd-session[226335]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:43:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:48 compute-0 ceph-mon[74331]: pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:43:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:48.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:49 compute-0 python3.9[226489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:43:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:43:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:50.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:50 compute-0 python3.9[226644]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:43:50 compute-0 network[226662]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:43:50 compute-0 network[226663]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:43:50 compute-0 network[226664]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:43:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:50 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:43:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:50 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:43:50 compute-0 ceph-mon[74331]: pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:43:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:50.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:43:50] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:43:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:43:50] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:43:51 compute-0 sudo[226670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:43:51 compute-0 sudo[226670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:51 compute-0 sudo[226670]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:51 compute-0 sudo[226697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:43:51 compute-0 sudo[226697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:43:51 compute-0 sudo[226697]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:43:51 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:43:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:43:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:43:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:52.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:52 compute-0 sudo[226794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:43:52 compute-0 sudo[226794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:52 compute-0 sudo[226794]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:52 compute-0 sudo[226822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:43:52 compute-0 sudo[226822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:52 compute-0 podman[226912]: 2025-11-24 09:43:52.588701294 +0000 UTC m=+0.053558360 container create a2569773a3e8ff0fba76c51eb6e837f536a05a654cd536978d51cb3f7bb40c63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:43:52 compute-0 systemd[1]: Started libpod-conmon-a2569773a3e8ff0fba76c51eb6e837f536a05a654cd536978d51cb3f7bb40c63.scope.
Nov 24 09:43:52 compute-0 podman[226912]: 2025-11-24 09:43:52.563514361 +0000 UTC m=+0.028371457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:43:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:43:52 compute-0 podman[226912]: 2025-11-24 09:43:52.694592999 +0000 UTC m=+0.159450085 container init a2569773a3e8ff0fba76c51eb6e837f536a05a654cd536978d51cb3f7bb40c63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 09:43:52 compute-0 podman[226912]: 2025-11-24 09:43:52.702699853 +0000 UTC m=+0.167556919 container start a2569773a3e8ff0fba76c51eb6e837f536a05a654cd536978d51cb3f7bb40c63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_benz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 09:43:52 compute-0 podman[226912]: 2025-11-24 09:43:52.705976639 +0000 UTC m=+0.170833705 container attach a2569773a3e8ff0fba76c51eb6e837f536a05a654cd536978d51cb3f7bb40c63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:43:52 compute-0 loving_benz[226934]: 167 167
Nov 24 09:43:52 compute-0 podman[226912]: 2025-11-24 09:43:52.710833526 +0000 UTC m=+0.175690592 container died a2569773a3e8ff0fba76c51eb6e837f536a05a654cd536978d51cb3f7bb40c63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:43:52 compute-0 systemd[1]: libpod-a2569773a3e8ff0fba76c51eb6e837f536a05a654cd536978d51cb3f7bb40c63.scope: Deactivated successfully.
Nov 24 09:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d33d258ab49b9b1b0a1b390772456c45b3a3ff419bec8501107ba3107d3a51d5-merged.mount: Deactivated successfully.
Nov 24 09:43:52 compute-0 podman[226912]: 2025-11-24 09:43:52.745672692 +0000 UTC m=+0.210529748 container remove a2569773a3e8ff0fba76c51eb6e837f536a05a654cd536978d51cb3f7bb40c63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_benz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:43:52 compute-0 systemd[1]: libpod-conmon-a2569773a3e8ff0fba76c51eb6e837f536a05a654cd536978d51cb3f7bb40c63.scope: Deactivated successfully.
Nov 24 09:43:52 compute-0 ceph-mon[74331]: pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:43:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:43:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:43:52 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:43:52 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:43:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:43:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:43:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:43:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:43:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:52.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:43:52 compute-0 podman[226968]: 2025-11-24 09:43:52.945645012 +0000 UTC m=+0.053640751 container create 730b05e8ac3bbd09ed00e020e41fc624bdcb1120161ffe25af332cca7142066b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_jones, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:43:52 compute-0 systemd[1]: Started libpod-conmon-730b05e8ac3bbd09ed00e020e41fc624bdcb1120161ffe25af332cca7142066b.scope.
Nov 24 09:43:53 compute-0 podman[226968]: 2025-11-24 09:43:52.922209426 +0000 UTC m=+0.030205185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:43:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc89b0767e81fc941440c4988b99f947efb3048c9a72952733758bc8c5c5943/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc89b0767e81fc941440c4988b99f947efb3048c9a72952733758bc8c5c5943/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc89b0767e81fc941440c4988b99f947efb3048c9a72952733758bc8c5c5943/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc89b0767e81fc941440c4988b99f947efb3048c9a72952733758bc8c5c5943/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc89b0767e81fc941440c4988b99f947efb3048c9a72952733758bc8c5c5943/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:53 compute-0 podman[226968]: 2025-11-24 09:43:53.037493049 +0000 UTC m=+0.145488808 container init 730b05e8ac3bbd09ed00e020e41fc624bdcb1120161ffe25af332cca7142066b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:43:53 compute-0 podman[226968]: 2025-11-24 09:43:53.047017299 +0000 UTC m=+0.155013038 container start 730b05e8ac3bbd09ed00e020e41fc624bdcb1120161ffe25af332cca7142066b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_jones, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:43:53 compute-0 podman[226968]: 2025-11-24 09:43:53.050289735 +0000 UTC m=+0.158285464 container attach 730b05e8ac3bbd09ed00e020e41fc624bdcb1120161ffe25af332cca7142066b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_jones, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 09:43:53 compute-0 optimistic_jones[226990]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:43:53 compute-0 optimistic_jones[226990]: --> All data devices are unavailable
Nov 24 09:43:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:53 compute-0 systemd[1]: libpod-730b05e8ac3bbd09ed00e020e41fc624bdcb1120161ffe25af332cca7142066b.scope: Deactivated successfully.
Nov 24 09:43:53 compute-0 podman[227025]: 2025-11-24 09:43:53.461988474 +0000 UTC m=+0.030636317 container died 730b05e8ac3bbd09ed00e020e41fc624bdcb1120161ffe25af332cca7142066b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_jones, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-efc89b0767e81fc941440c4988b99f947efb3048c9a72952733758bc8c5c5943-merged.mount: Deactivated successfully.
Nov 24 09:43:53 compute-0 podman[227025]: 2025-11-24 09:43:53.507218993 +0000 UTC m=+0.075866856 container remove 730b05e8ac3bbd09ed00e020e41fc624bdcb1120161ffe25af332cca7142066b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_jones, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 09:43:53 compute-0 systemd[1]: libpod-conmon-730b05e8ac3bbd09ed00e020e41fc624bdcb1120161ffe25af332cca7142066b.scope: Deactivated successfully.
Nov 24 09:43:53 compute-0 sudo[226822]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:53 compute-0 sudo[227046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:43:53 compute-0 sudo[227046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:43:53 compute-0 sudo[227046]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:53 compute-0 sudo[227101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:43:53 compute-0 sudo[227101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:53 compute-0 podman[227090]: 2025-11-24 09:43:53.742906153 +0000 UTC m=+0.083745244 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:43:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:54.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:54 compute-0 podman[227180]: 2025-11-24 09:43:54.120558046 +0000 UTC m=+0.053709723 container create 910d649a582c065075fbc1329754036107cafa375f44cfa15f38c0088642a7ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shirley, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:43:54 compute-0 systemd[1]: Started libpod-conmon-910d649a582c065075fbc1329754036107cafa375f44cfa15f38c0088642a7ff.scope.
Nov 24 09:43:54 compute-0 podman[227180]: 2025-11-24 09:43:54.095952209 +0000 UTC m=+0.029103896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:43:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:43:54 compute-0 podman[227180]: 2025-11-24 09:43:54.213740797 +0000 UTC m=+0.146892494 container init 910d649a582c065075fbc1329754036107cafa375f44cfa15f38c0088642a7ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:43:54 compute-0 podman[227180]: 2025-11-24 09:43:54.222120078 +0000 UTC m=+0.155271745 container start 910d649a582c065075fbc1329754036107cafa375f44cfa15f38c0088642a7ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:43:54 compute-0 podman[227180]: 2025-11-24 09:43:54.226021961 +0000 UTC m=+0.159173618 container attach 910d649a582c065075fbc1329754036107cafa375f44cfa15f38c0088642a7ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 09:43:54 compute-0 objective_shirley[227197]: 167 167
Nov 24 09:43:54 compute-0 systemd[1]: libpod-910d649a582c065075fbc1329754036107cafa375f44cfa15f38c0088642a7ff.scope: Deactivated successfully.
Nov 24 09:43:54 compute-0 podman[227180]: 2025-11-24 09:43:54.231492935 +0000 UTC m=+0.164644622 container died 910d649a582c065075fbc1329754036107cafa375f44cfa15f38c0088642a7ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcb0fcb9b3e686028e8db4454be0507421fefc48a4da8b0a387b13a9c2910514-merged.mount: Deactivated successfully.
Nov 24 09:43:54 compute-0 podman[227180]: 2025-11-24 09:43:54.273030457 +0000 UTC m=+0.206182124 container remove 910d649a582c065075fbc1329754036107cafa375f44cfa15f38c0088642a7ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 09:43:54 compute-0 systemd[1]: libpod-conmon-910d649a582c065075fbc1329754036107cafa375f44cfa15f38c0088642a7ff.scope: Deactivated successfully.
Nov 24 09:43:54 compute-0 podman[227245]: 2025-11-24 09:43:54.446043028 +0000 UTC m=+0.036233554 container create ccdc8a7d7bcfb3a56ed08252b66e017eb7240e111169c727f3b6c6d6e5ebb404 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:43:54 compute-0 systemd[1]: Started libpod-conmon-ccdc8a7d7bcfb3a56ed08252b66e017eb7240e111169c727f3b6c6d6e5ebb404.scope.
Nov 24 09:43:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe021fda332a56d3ecf1cbd86a435ff96b2400ba3caac9eebc9d33ae22cfa39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe021fda332a56d3ecf1cbd86a435ff96b2400ba3caac9eebc9d33ae22cfa39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe021fda332a56d3ecf1cbd86a435ff96b2400ba3caac9eebc9d33ae22cfa39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe021fda332a56d3ecf1cbd86a435ff96b2400ba3caac9eebc9d33ae22cfa39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:54 compute-0 podman[227245]: 2025-11-24 09:43:54.431114485 +0000 UTC m=+0.021305031 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:43:54 compute-0 podman[227245]: 2025-11-24 09:43:54.535552182 +0000 UTC m=+0.125742728 container init ccdc8a7d7bcfb3a56ed08252b66e017eb7240e111169c727f3b6c6d6e5ebb404 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 09:43:54 compute-0 podman[227245]: 2025-11-24 09:43:54.568374055 +0000 UTC m=+0.158564581 container start ccdc8a7d7bcfb3a56ed08252b66e017eb7240e111169c727f3b6c6d6e5ebb404 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:43:54 compute-0 podman[227245]: 2025-11-24 09:43:54.572396851 +0000 UTC m=+0.162587407 container attach ccdc8a7d7bcfb3a56ed08252b66e017eb7240e111169c727f3b6c6d6e5ebb404 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:43:54 compute-0 sudo[227368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkysnwznjsiaorxsffykvawcrvunhadb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977434.3956819-101-189173234283987/AnsiballZ_setup.py'
Nov 24 09:43:54 compute-0 sudo[227368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:54 compute-0 ceph-mon[74331]: pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:43:54 compute-0 goofy_chaum[227291]: {
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:     "0": [
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:         {
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             "devices": [
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "/dev/loop3"
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             ],
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             "lv_name": "ceph_lv0",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             "lv_size": "21470642176",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             "name": "ceph_lv0",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             "tags": {
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.cluster_name": "ceph",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.crush_device_class": "",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.encrypted": "0",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.osd_id": "0",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.type": "block",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.vdo": "0",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:                 "ceph.with_tpm": "0"
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             },
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             "type": "block",
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:             "vg_name": "ceph_vg0"
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:         }
Nov 24 09:43:54 compute-0 goofy_chaum[227291]:     ]
Nov 24 09:43:54 compute-0 goofy_chaum[227291]: }
Nov 24 09:43:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:54.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:54 compute-0 systemd[1]: libpod-ccdc8a7d7bcfb3a56ed08252b66e017eb7240e111169c727f3b6c6d6e5ebb404.scope: Deactivated successfully.
Nov 24 09:43:54 compute-0 podman[227245]: 2025-11-24 09:43:54.873042149 +0000 UTC m=+0.463232685 container died ccdc8a7d7bcfb3a56ed08252b66e017eb7240e111169c727f3b6c6d6e5ebb404 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fe021fda332a56d3ecf1cbd86a435ff96b2400ba3caac9eebc9d33ae22cfa39-merged.mount: Deactivated successfully.
Nov 24 09:43:54 compute-0 podman[227245]: 2025-11-24 09:43:54.92784627 +0000 UTC m=+0.518036796 container remove ccdc8a7d7bcfb3a56ed08252b66e017eb7240e111169c727f3b6c6d6e5ebb404 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:43:54 compute-0 systemd[1]: libpod-conmon-ccdc8a7d7bcfb3a56ed08252b66e017eb7240e111169c727f3b6c6d6e5ebb404.scope: Deactivated successfully.
Nov 24 09:43:54 compute-0 sudo[227101]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:55 compute-0 sudo[227388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:43:55 compute-0 sudo[227388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:55 compute-0 sudo[227388]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:55 compute-0 python3.9[227370]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 09:43:55 compute-0 sudo[227413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:43:55 compute-0 sudo[227413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:55 compute-0 sudo[227368]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:55 compute-0 podman[227485]: 2025-11-24 09:43:55.502461694 +0000 UTC m=+0.036748937 container create a25b4b4a465d95ed5d7297525c7bd7083c934d1b4771eb3db583b5adc5bf845f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:43:55 compute-0 systemd[1]: Started libpod-conmon-a25b4b4a465d95ed5d7297525c7bd7083c934d1b4771eb3db583b5adc5bf845f.scope.
Nov 24 09:43:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:43:55 compute-0 podman[227485]: 2025-11-24 09:43:55.486847063 +0000 UTC m=+0.021134326 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:43:55 compute-0 podman[227485]: 2025-11-24 09:43:55.588554579 +0000 UTC m=+0.122841852 container init a25b4b4a465d95ed5d7297525c7bd7083c934d1b4771eb3db583b5adc5bf845f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 09:43:55 compute-0 podman[227485]: 2025-11-24 09:43:55.596885118 +0000 UTC m=+0.131172361 container start a25b4b4a465d95ed5d7297525c7bd7083c934d1b4771eb3db583b5adc5bf845f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclean, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 09:43:55 compute-0 podman[227485]: 2025-11-24 09:43:55.601239463 +0000 UTC m=+0.135526726 container attach a25b4b4a465d95ed5d7297525c7bd7083c934d1b4771eb3db583b5adc5bf845f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 24 09:43:55 compute-0 trusting_mclean[227508]: 167 167
Nov 24 09:43:55 compute-0 systemd[1]: libpod-a25b4b4a465d95ed5d7297525c7bd7083c934d1b4771eb3db583b5adc5bf845f.scope: Deactivated successfully.
Nov 24 09:43:55 compute-0 podman[227485]: 2025-11-24 09:43:55.603394109 +0000 UTC m=+0.137681352 container died a25b4b4a465d95ed5d7297525c7bd7083c934d1b4771eb3db583b5adc5bf845f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclean, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 24 09:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b850230a610b9686ff5af9db3a8f80940982ca24cc726ae0c55832163b055bc-merged.mount: Deactivated successfully.
Nov 24 09:43:55 compute-0 podman[227485]: 2025-11-24 09:43:55.642518828 +0000 UTC m=+0.176806061 container remove a25b4b4a465d95ed5d7297525c7bd7083c934d1b4771eb3db583b5adc5bf845f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclean, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:43:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:43:55 compute-0 systemd[1]: libpod-conmon-a25b4b4a465d95ed5d7297525c7bd7083c934d1b4771eb3db583b5adc5bf845f.scope: Deactivated successfully.
Nov 24 09:43:55 compute-0 sudo[227593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiogrrdgwxecymvtjsbtvmulinidiweo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977434.3956819-101-189173234283987/AnsiballZ_dnf.py'
Nov 24 09:43:55 compute-0 sudo[227593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:43:55 compute-0 podman[227601]: 2025-11-24 09:43:55.828999204 +0000 UTC m=+0.050292154 container create bb03ba35383905284ce84ee2621d510696f763997d061b0579dcb87d299d2be3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:43:55 compute-0 systemd[1]: Started libpod-conmon-bb03ba35383905284ce84ee2621d510696f763997d061b0579dcb87d299d2be3.scope.
Nov 24 09:43:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:43:55 compute-0 podman[227601]: 2025-11-24 09:43:55.80645289 +0000 UTC m=+0.027745890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2129f5b0ede66944a28f375776f506a5778c4d05423d6f73b252e68c572ab1e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2129f5b0ede66944a28f375776f506a5778c4d05423d6f73b252e68c572ab1e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2129f5b0ede66944a28f375776f506a5778c4d05423d6f73b252e68c572ab1e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2129f5b0ede66944a28f375776f506a5778c4d05423d6f73b252e68c572ab1e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:43:55 compute-0 podman[227601]: 2025-11-24 09:43:55.915048867 +0000 UTC m=+0.136341847 container init bb03ba35383905284ce84ee2621d510696f763997d061b0579dcb87d299d2be3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 24 09:43:55 compute-0 podman[227601]: 2025-11-24 09:43:55.926370554 +0000 UTC m=+0.147663504 container start bb03ba35383905284ce84ee2621d510696f763997d061b0579dcb87d299d2be3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 09:43:55 compute-0 podman[227601]: 2025-11-24 09:43:55.930750219 +0000 UTC m=+0.152043199 container attach bb03ba35383905284ce84ee2621d510696f763997d061b0579dcb87d299d2be3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:43:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:56.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:56 compute-0 python3.9[227595]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:43:56 compute-0 lvm[227694]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:43:56 compute-0 lvm[227694]: VG ceph_vg0 finished
Nov 24 09:43:56 compute-0 laughing_bardeen[227618]: {}
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:43:56 compute-0 systemd[1]: libpod-bb03ba35383905284ce84ee2621d510696f763997d061b0579dcb87d299d2be3.scope: Deactivated successfully.
Nov 24 09:43:56 compute-0 systemd[1]: libpod-bb03ba35383905284ce84ee2621d510696f763997d061b0579dcb87d299d2be3.scope: Consumed 1.154s CPU time.
Nov 24 09:43:56 compute-0 podman[227601]: 2025-11-24 09:43:56.652194835 +0000 UTC m=+0.873487815 container died bb03ba35383905284ce84ee2621d510696f763997d061b0579dcb87d299d2be3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2129f5b0ede66944a28f375776f506a5778c4d05423d6f73b252e68c572ab1e8-merged.mount: Deactivated successfully.
Nov 24 09:43:56 compute-0 podman[227601]: 2025-11-24 09:43:56.697354044 +0000 UTC m=+0.918646994 container remove bb03ba35383905284ce84ee2621d510696f763997d061b0579dcb87d299d2be3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_bardeen, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 09:43:56 compute-0 systemd[1]: libpod-conmon-bb03ba35383905284ce84ee2621d510696f763997d061b0579dcb87d299d2be3.scope: Deactivated successfully.
Nov 24 09:43:56 compute-0 sudo[227413]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:43:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:56 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7b8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:56 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:43:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:43:56 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:43:56 compute-0 ceph-mon[74331]: pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:43:56 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:43:56 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:43:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:56.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:56 compute-0 sudo[227722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:43:56 compute-0 sudo[227722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:43:56 compute-0 sudo[227722]: pam_unix(sudo:session): session closed for user root
Nov 24 09:43:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:43:57.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:43:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:57 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a40016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:43:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:57 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc788000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:43:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:43:58.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:43:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:43:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094358 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:43:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:58 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc798000fb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:43:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:43:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:43:58.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:43:58 compute-0 ceph-mon[74331]: pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:43:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:59 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:43:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:43:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:43:59 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a4002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:00.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:00 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:00.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:00 compute-0 ceph-mon[74331]: pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:44:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:44:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:44:00] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:44:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:44:00] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:44:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:01 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc798001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:01 compute-0 sudo[227593]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:44:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:01 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:44:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:02.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:44:02 compute-0 sudo[227901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoispuwgwnbjilxfijknigmzgwdxiyor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977441.7148302-137-264221150272489/AnsiballZ_stat.py'
Nov 24 09:44:02 compute-0 sudo[227901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:02 compute-0 python3.9[227903]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:44:02 compute-0 sudo[227901]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:02 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a4002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:44:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:02.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:44:02 compute-0 sudo[227981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:44:02 compute-0 sudo[227981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:02 compute-0 sudo[227981]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:02 compute-0 ceph-mon[74331]: pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:44:03 compute-0 sudo[228079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzvxzsttxsavfojqhfdtxnxysvsjpfit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977442.660409-167-7230610963031/AnsiballZ_command.py'
Nov 24 09:44:03 compute-0 sudo[228079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:03 compute-0 python3.9[228081]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:44:03 compute-0 sudo[228079]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:03 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:44:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:03 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc798001cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:03 compute-0 sudo[228233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjgsentucqewmzszzcbdlvojdnvyswlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977443.7060645-197-140674153107676/AnsiballZ_stat.py'
Nov 24 09:44:03 compute-0 sudo[228233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:04.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:04 compute-0 python3.9[228235]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:44:04 compute-0 sudo[228233]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:04 compute-0 sudo[228386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doqfnbbzawcdezgsfbvqdhnjojimzhpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977444.4975634-221-42106194689867/AnsiballZ_command.py'
Nov 24 09:44:04 compute-0 sudo[228386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:04 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:04.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:04 compute-0 ceph-mon[74331]: pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:44:04 compute-0 python3.9[228388]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:44:04 compute-0 sudo[228386]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:05 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a4002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:05 compute-0 sudo[228540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snxsezgcxaarwzpxzzickzewpxcizvme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977445.3189192-245-53736512019970/AnsiballZ_stat.py'
Nov 24 09:44:05 compute-0 sudo[228540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:44:05 compute-0 python3.9[228542]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:05 compute-0 sudo[228540]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:05 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:06.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:06 compute-0 sudo[228663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwcikxfahzvytozsblamrrvokowfbsyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977445.3189192-245-53736512019970/AnsiballZ_copy.py'
Nov 24 09:44:06 compute-0 sudo[228663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:06 compute-0 python3.9[228665]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977445.3189192-245-53736512019970/.source.iscsi _original_basename=.e_ukf_cy follow=False checksum=0daba7002dfdf0fdf2794db25361deed70d0915c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:06 compute-0 sudo[228663]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:06 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7980029c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:06.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:06 compute-0 ceph-mon[74331]: pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:44:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:44:07.039Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:44:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:44:07.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:44:07 compute-0 sudo[228816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crrqzskecvkziuxlykbnqomftgfbmwtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977446.7938035-290-203262429075022/AnsiballZ_file.py'
Nov 24 09:44:07 compute-0 sudo[228816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:07 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:07 compute-0 python3.9[228818]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:07 compute-0 sudo[228816]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:44:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:07 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a4002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:08 compute-0 sudo[228969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tayaklsfdpoegedzlsbejsddtgbqijpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977447.6098313-314-24574295042116/AnsiballZ_lineinfile.py'
Nov 24 09:44:08 compute-0 sudo[228969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:08.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:08 compute-0 python3.9[228971]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:08 compute-0 sudo[228969]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:08 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc788002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:08.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:08 compute-0 ceph-mon[74331]: pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:44:09 compute-0 sudo[229122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csautpprfkuvffebcwgwgcbayhyvbact ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977448.5629163-341-216835011586364/AnsiballZ_systemd_service.py'
Nov 24 09:44:09 compute-0 sudo[229122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:09 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7980029c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:09 compute-0 python3.9[229124]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:44:09 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 24 09:44:09 compute-0 sudo[229122]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:44:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:09 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:10.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:10 compute-0 sudo[229280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzwzsmllfcswhypvdprscxrfuvsxanav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977449.9745061-365-204197176920582/AnsiballZ_systemd_service.py'
Nov 24 09:44:10 compute-0 sudo[229280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:10 compute-0 python3.9[229282]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:44:10 compute-0 systemd[1]: Reloading.
Nov 24 09:44:10 compute-0 systemd-rc-local-generator[229310]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:44:10 compute-0 systemd-sysv-generator[229315]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:44:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:10 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a4002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:10.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:10 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 09:44:10 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 24 09:44:10 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 24 09:44:10 compute-0 systemd[1]: Started Open-iSCSI.
Nov 24 09:44:10 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 24 09:44:10 compute-0 ceph-mon[74331]: pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:44:10 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 24 09:44:10 compute-0 sudo[229280]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:44:10] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:44:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:44:10] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:44:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:11 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc788002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:44:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:11 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7980029c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:12.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:12 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7ac002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:12.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:12 compute-0 ceph-mon[74331]: pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:44:13 compute-0 sudo[229482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvjubeaticekfucjkbpekbxuvjemfcbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977452.78495-398-214915549729943/AnsiballZ_service_facts.py'
Nov 24 09:44:13 compute-0 sudo[229482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:13 compute-0 python3.9[229484]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:44:13 compute-0 network[229502]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:44:13 compute-0 network[229503]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:44:13 compute-0 network[229504]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:44:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:13 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7a4002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:13 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc788002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:44:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:14.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:44:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[226264]: 24/11/2025 09:44:14 : epoch 692428d0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7980029c0 fd 38 proxy ignored for local
Nov 24 09:44:14 compute-0 kernel: ganesha.nfsd[227705]: segfault at 50 ip 00007fc8681f832e sp 00007fc82fffe210 error 4 in libntirpc.so.5.8[7fc8681dd000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 24 09:44:14 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:44:14 compute-0 systemd[1]: Started Process Core Dump (PID 229553/UID 0).
Nov 24 09:44:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:44:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:14.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:44:14 compute-0 ceph-mon[74331]: pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:44:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:44:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:44:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:44:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:44:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:44:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:15 compute-0 systemd-coredump[229555]: Process 226268 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 47:
                                                    #0  0x00007fc8681f832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:44:15 compute-0 systemd[1]: systemd-coredump@6-229553-0.service: Deactivated successfully.
Nov 24 09:44:15 compute-0 systemd[1]: systemd-coredump@6-229553-0.service: Consumed 1.080s CPU time.
Nov 24 09:44:15 compute-0 podman[229570]: 2025-11-24 09:44:15.979274823 +0000 UTC m=+0.026429716 container died df9f8985edb66069b61b85b3d81a0dab55381b41763cd3aad0cf5a40e5727bbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e80afccf69c092751e552cbb0f9727659470ceedd45d18549d247b34002db41-merged.mount: Deactivated successfully.
Nov 24 09:44:16 compute-0 podman[229570]: 2025-11-24 09:44:16.017618551 +0000 UTC m=+0.064773424 container remove df9f8985edb66069b61b85b3d81a0dab55381b41763cd3aad0cf5a40e5727bbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 09:44:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:44:16 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:44:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:16.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:16 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Failed with result 'exit-code'.
Nov 24 09:44:16 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.325s CPU time.
Nov 24 09:44:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:16.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:16 compute-0 podman[229659]: 2025-11-24 09:44:16.945388355 +0000 UTC m=+0.093348157 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:44:17 compute-0 ceph-mon[74331]: pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:44:17.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:44:17 compute-0 sudo[229482]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:44:17 compute-0 sudo[229849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwummfhpylgwbvkkxcjotzehlaamtekk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977457.6813905-428-230233466031119/AnsiballZ_file.py'
Nov 24 09:44:17 compute-0 sudo[229849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:18.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:18 compute-0 python3.9[229851]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 09:44:18 compute-0 sudo[229849]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:18 compute-0 sudo[230002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbwfqwpmrcxxtgcjrnuozvuzfhtvigit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977458.4946942-452-111091544170378/AnsiballZ_modprobe.py'
Nov 24 09:44:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:44:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:18.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:44:18 compute-0 sudo[230002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:19 compute-0 ceph-mon[74331]: pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:44:19 compute-0 python3.9[230004]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 24 09:44:19 compute-0 sudo[230002]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:19 compute-0 sudo[230159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejlnllzbcgyqncrpgjlpoapcakxjvmvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977459.3982651-476-33129973050071/AnsiballZ_stat.py'
Nov 24 09:44:19 compute-0 sudo[230159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:19 compute-0 python3.9[230161]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:19 compute-0 sudo[230159]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:44:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:20.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:44:20 compute-0 sudo[230282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwveajqyljefbljoomkdacoyibvrnzss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977459.3982651-476-33129973050071/AnsiballZ_copy.py'
Nov 24 09:44:20 compute-0 sudo[230282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:20 compute-0 python3.9[230284]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977459.3982651-476-33129973050071/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:20 compute-0 sudo[230282]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:44:20.554 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:44:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:44:20.555 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:44:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:44:20.555 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:44:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094420 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:44:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:44:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:20.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:44:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:44:20] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:44:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:44:20] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:44:21 compute-0 ceph-mon[74331]: pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:21 compute-0 sudo[230435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmpdaahucgmmndbqssuaovvekhvnjsjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977460.9267123-524-29979791598748/AnsiballZ_lineinfile.py'
Nov 24 09:44:21 compute-0 sudo[230435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:21 compute-0 python3.9[230437]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:21 compute-0 sudo[230435]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:22.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:22 compute-0 sudo[230589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybbdgfydjzaanakmiknazdoshqieaecb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977461.7199821-548-100170958251425/AnsiballZ_systemd.py'
Nov 24 09:44:22 compute-0 sudo[230589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:22 compute-0 python3.9[230591]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:44:22 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 09:44:22 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 24 09:44:22 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 24 09:44:22 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 24 09:44:22 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 24 09:44:22 compute-0 sudo[230589]: pam_unix(sudo:session): session closed for user root
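The tasks above drop a dm-multipath entry into /etc/modules-load.d and /etc/modules, then restart systemd-modules-load.service so the module is active without a reboot. A minimal check-and-load sketch with the same effect, assuming root and that /proc/modules lists names with underscores (dm_multipath):

    #!/usr/bin/env python3
    # Ensure dm-multipath is loaded, mirroring the modprobe task above.
    import subprocess

    MODULE = "dm-multipath"

    def module_loaded(name: str) -> bool:
        key = name.replace("-", "_")
        with open("/proc/modules") as f:
            return any(line.split()[0] == key for line in f)

    if not module_loaded(MODULE):
        subprocess.run(["modprobe", MODULE], check=True)
    print(f"{MODULE} loaded: {module_loaded(MODULE)}")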
Nov 24 09:44:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:22.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:22 compute-0 sudo[230641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:44:22 compute-0 sudo[230641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:22 compute-0 sudo[230641]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:23 compute-0 ceph-mon[74331]: pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:23 compute-0 sudo[230770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plkggvomybkngrdnvryydmdnycethasb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977462.9526901-572-234961002931864/AnsiballZ_file.py'
Nov 24 09:44:23 compute-0 sudo[230770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:23 compute-0 python3.9[230773]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:23 compute-0 sudo[230770]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:44:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:24.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:24 compute-0 podman[230897]: 2025-11-24 09:44:24.133444012 +0000 UTC m=+0.047471650 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 09:44:24 compute-0 sudo[230940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjagqlvolhzfidtjuzyyovgjtjvvezgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977463.8548653-599-232053360814052/AnsiballZ_stat.py'
Nov 24 09:44:24 compute-0 sudo[230940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:24 compute-0 python3.9[230944]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:44:24 compute-0 sudo[230940]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:44:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:24.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:44:24 compute-0 sudo[231095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvaxahcbmcftjjauwcyppdkvpgmjwvhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977464.7059953-626-151179412332252/AnsiballZ_stat.py'
Nov 24 09:44:24 compute-0 sudo[231095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:25 compute-0 ceph-mon[74331]: pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:44:25 compute-0 python3.9[231097]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:44:25 compute-0 sudo[231095]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:44:25 compute-0 sudo[231248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voyxnfimflwjkuwjorhrcaoadeakwzhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977465.520027-650-43380092818753/AnsiballZ_stat.py'
Nov 24 09:44:25 compute-0 sudo[231248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:25 compute-0 python3.9[231250]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:25 compute-0 sudo[231248]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.002000051s ======
Nov 24 09:44:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:26.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 24 09:44:26 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Scheduled restart job, restart counter is at 7.
Nov 24 09:44:26 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:44:26 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.325s CPU time.
Nov 24 09:44:26 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64...
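systemd is now on restart attempt 7 for this NFS unit (the earlier 'Failed with result exit-code' entries are the same service dying). A quick way to pull its recent output for triage, with the unit name copied verbatim from the log; the -n 50 window is an arbitrary choice:

    #!/usr/bin/env python3
    # Dump the last journal entries for the restart-looping NFS unit.
    import subprocess

    UNIT = ("ceph-84a084c3-61a7-5de7-8207-1f88efa59a64"
            "@nfs.cephfs.2.0.compute-0.ssprex.service")
    subprocess.run(["journalctl", "-u", UNIT, "-n", "50", "--no-pager"])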
Nov 24 09:44:26 compute-0 sudo[231396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdznafvfzuxclvmticjekxepnbuhqeuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977465.520027-650-43380092818753/AnsiballZ_copy.py'
Nov 24 09:44:26 compute-0 sudo[231396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:26 compute-0 podman[231422]: 2025-11-24 09:44:26.438559423 +0000 UTC m=+0.037088386 container create e52b5521c1ca85472f90d84d542c2f965eec7792e33978d03021d5570540aca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 09:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7c7091bee56c0867016abdbe9e6cf981db5df26d3faf4ee292150e91aa4387/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7c7091bee56c0867016abdbe9e6cf981db5df26d3faf4ee292150e91aa4387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7c7091bee56c0867016abdbe9e6cf981db5df26d3faf4ee292150e91aa4387/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7c7091bee56c0867016abdbe9e6cf981db5df26d3faf4ee292150e91aa4387/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ssprex-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:44:26 compute-0 podman[231422]: 2025-11-24 09:44:26.509215742 +0000 UTC m=+0.107744745 container init e52b5521c1ca85472f90d84d542c2f965eec7792e33978d03021d5570540aca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:44:26 compute-0 podman[231422]: 2025-11-24 09:44:26.515000804 +0000 UTC m=+0.113529777 container start e52b5521c1ca85472f90d84d542c2f965eec7792e33978d03021d5570540aca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 09:44:26 compute-0 podman[231422]: 2025-11-24 09:44:26.420753715 +0000 UTC m=+0.019282688 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:44:26 compute-0 bash[231422]: e52b5521c1ca85472f90d84d542c2f965eec7792e33978d03021d5570540aca5
Nov 24 09:44:26 compute-0 python3.9[231403]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977465.520027-650-43380092818753/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:26 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:44:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:26 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:44:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:26 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:44:26 compute-0 sudo[231396]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:26 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:44:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:26 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:44:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:26 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:44:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:26 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:44:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:26 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:44:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:26 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:44:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:44:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:26.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:44:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:44:27.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:44:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:44:27.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
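Both webhook receivers that Alertmanager keeps failing to notify are plain HTTP POST targets, so the timeout can be reproduced outside Alertmanager. A sketch using the URL from the error above; the empty alerts list is only a placeholder body, not the full Alertmanager webhook payload:

    #!/usr/bin/env python3
    # Probe the ceph-dashboard prometheus receiver that times out above.
    import json, urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url, data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)
    except Exception as exc:  # an i/o timeout here matches the dispatcher error
        print(f"unreachable: {exc}")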
Nov 24 09:44:27 compute-0 ceph-mon[74331]: pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:44:27 compute-0 sudo[231629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrscdmeqvhyjfcofqbxprnnkmsyqnxba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977466.9113727-695-1941818597797/AnsiballZ_command.py'
Nov 24 09:44:27 compute-0 sudo[231629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:27 compute-0 python3.9[231631]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:44:27 compute-0 sudo[231629]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:44:28 compute-0 sudo[231783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iavbtyzicldoosggdndlzigsotwvipya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977467.7727761-719-119977133680710/AnsiballZ_lineinfile.py'
Nov 24 09:44:28 compute-0 sudo[231783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:44:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:28.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:44:28 compute-0 python3.9[231785]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:28 compute-0 sudo[231783]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:44:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:28.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:44:28 compute-0 sudo[231936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uenhohxzphdkddocmblajzndlmmnbkxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977468.4942777-743-40358995248679/AnsiballZ_replace.py'
Nov 24 09:44:28 compute-0 sudo[231936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:29 compute-0 ceph-mon[74331]: pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:44:29 compute-0 python3.9[231938]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:29 compute-0 sudo[231936]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:29 compute-0 sudo[232089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpxjqwfrftaviiggcmpzvvfodcxfruwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977469.3558452-767-54142225712792/AnsiballZ_replace.py'
Nov 24 09:44:29 compute-0 sudo[232089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:44:29 compute-0 python3.9[232091]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:29 compute-0 sudo[232089]: pam_unix(sudo:session): session closed for user root
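The grep/lineinfile/replace sequence above ensures /etc/multipath.conf carries a blacklist block and strips a catch-all devnode ".*" entry from it, so multipath devices are not blacklisted wholesale. A rough functional equivalent in one pass (not the exact lineinfile semantics, which only append the opening line when grep finds no block):

    #!/usr/bin/env python3
    # Normalize the blacklist block in /etc/multipath.conf.
    import re

    PATH = "/etc/multipath.conf"
    text = open(PATH).read()

    # Drop the catch-all devnode entry directly under 'blacklist {'.
    text = re.sub(r'^blacklist\s*{\n\s+devnode "\.\*"', "blacklist {", text,
                  flags=re.MULTILINE)

    # Create an empty block if none exists (the lineinfile + replace steps).
    if not re.search(r"^blacklist\s*{", text, flags=re.MULTILINE):
        text += "\nblacklist {\n}\n"

    open(PATH, "w").write(text)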
Nov 24 09:44:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:30.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:30 compute-0 sudo[232242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myskduiviqnundhphotlqxrighfdweyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977470.2037902-794-168207986344172/AnsiballZ_lineinfile.py'
Nov 24 09:44:30 compute-0 sudo[232242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:30 compute-0 python3.9[232244]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:30 compute-0 sudo[232242]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:30.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:30 compute-0 sudo[232394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqcxhyumfsphfthquguevccciuqkbiqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977470.7465951-794-21195883902409/AnsiballZ_lineinfile.py'
Nov 24 09:44:30 compute-0 sudo[232394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:44:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:44:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:44:30] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:44:31 compute-0 python3.9[232396]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:31 compute-0 ceph-mon[74331]: pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:44:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:44:31 compute-0 sudo[232394]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:31 compute-0 sudo[232547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxkerldtnohrnzkispwhhmcfdopyxjxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977471.2938776-794-176045359113380/AnsiballZ_lineinfile.py'
Nov 24 09:44:31 compute-0 sudo[232547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:44:31 compute-0 python3.9[232549]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:31 compute-0 sudo[232547]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:32.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:32 compute-0 sudo[232699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqwlsppgxvsrokkgspmdkyojxvgdxmdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977471.9033458-794-218493215281772/AnsiballZ_lineinfile.py'
Nov 24 09:44:32 compute-0 sudo[232699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:32 compute-0 python3.9[232701]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:32 compute-0 sudo[232699]: pam_unix(sudo:session): session closed for user root
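Each of the four lineinfile tasks above inserts immediately after the first line matching ^defaults, so the options land in reverse order of execution. Assuming the copied template ships a defaults section for insertafter to match, the file should now read roughly:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
    }
    blacklist {
    }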
Nov 24 09:44:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:32 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:44:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:32 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:44:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:32.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:33 compute-0 ceph-mon[74331]: pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:44:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:33 compute-0 sudo[232853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpxsqsloohwvpqultadasaallrynusgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977473.130218-881-190919061074482/AnsiballZ_stat.py'
Nov 24 09:44:33 compute-0 sudo[232853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:33 compute-0 python3.9[232855]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:44:33 compute-0 sudo[232853]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:44:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:34.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:34 compute-0 sudo[233007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsxllhcxucpkaydfxztynvceqbphiyzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977473.9174955-905-69702645407787/AnsiballZ_file.py'
Nov 24 09:44:34 compute-0 sudo[233007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:34 compute-0 python3.9[233009]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:34 compute-0 sudo[233007]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:34.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:35 compute-0 sudo[233160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cocmmxinuxaltmtodxivgrruyskchtgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977474.8823898-932-208219998293718/AnsiballZ_file.py'
Nov 24 09:44:35 compute-0 sudo[233160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:35 compute-0 ceph-mon[74331]: pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:44:35 compute-0 python3.9[233162]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:35 compute-0 sudo[233160]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:44:35 compute-0 sudo[233313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzfrtjictkgkhksmvwuouxhpecgqvqhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977475.6212163-956-33858559978391/AnsiballZ_stat.py'
Nov 24 09:44:35 compute-0 sudo[233313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:36 compute-0 python3.9[233315]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:36 compute-0 sudo[233313]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:36.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:36 compute-0 sudo[233392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afyuwzonohxlupyrlzqhjmywyfynrggj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977475.6212163-956-33858559978391/AnsiballZ_file.py'
Nov 24 09:44:36 compute-0 sudo[233392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:36 compute-0 python3.9[233394]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:36 compute-0 sudo[233392]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:36 compute-0 sudo[233544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inagmmsczegncbfebagchtfmwetbtkuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977476.5953238-956-232441679286816/AnsiballZ_stat.py'
Nov 24 09:44:36 compute-0 sudo[233544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:44:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:36.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:44:37 compute-0 python3.9[233546]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:44:37.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:44:37 compute-0 sudo[233544]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:37 compute-0 ceph-mon[74331]: pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:44:37 compute-0 sudo[233623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daushjnxzxcdqatmztnjftrhwmpslvnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977476.5953238-956-232441679286816/AnsiballZ_file.py'
Nov 24 09:44:37 compute-0 sudo[233623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:37 compute-0 python3.9[233625]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:37 compute-0 sudo[233623]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:44:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:44:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:38.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:44:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:38 compute-0 sudo[233776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-papphzstbyadiszaxejemvbjihejngen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977478.1776807-1025-1010271668859/AnsiballZ_file.py'
Nov 24 09:44:38 compute-0 sudo[233776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:44:38 compute-0 python3.9[233778]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:44:38 compute-0 sudo[233776]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a00000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:38.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:39 compute-0 sudo[233944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykkavtigjucvuacizcicfscnmjeajxov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977478.9397743-1049-233194998374506/AnsiballZ_stat.py'
Nov 24 09:44:39 compute-0 sudo[233944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:39 compute-0 ceph-mon[74331]: pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:44:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:39 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:39 compute-0 python3.9[233946]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:39 compute-0 sudo[233944]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:44:39 compute-0 sudo[234022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjgliefagzbgphrqgncpwhuwkajfswxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977478.9397743-1049-233194998374506/AnsiballZ_file.py'
Nov 24 09:44:39 compute-0 sudo[234022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:39 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:39 compute-0 python3.9[234024]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:39 compute-0 sudo[234022]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:44:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:40.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:44:40 compute-0 sudo[234175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slxlvzcurarxpcwbjylgezuzdckjjkrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977480.189202-1085-233566769162284/AnsiballZ_stat.py'
Nov 24 09:44:40 compute-0 sudo[234175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:40 compute-0 python3.9[234177]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:40 compute-0 sudo[234175]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094440 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:44:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:40 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:40 compute-0 sudo[234253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiogrfxpzogarsfeaprvahalvxpocfky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977480.189202-1085-233566769162284/AnsiballZ_file.py'
Nov 24 09:44:40 compute-0 sudo[234253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:40.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:44:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:44:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:44:40] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:44:41 compute-0 python3.9[234255]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:41 compute-0 sudo[234253]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:41 compute-0 ceph-mon[74331]: pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:44:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:41 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a00001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:44:41 compute-0 sudo[234406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klgvbtqtarlrtinskxrjogexoocwgjsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977481.5859053-1121-167617930021424/AnsiballZ_systemd.py'
Nov 24 09:44:41 compute-0 sudo[234406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:41 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:44:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:44:42 compute-0 python3.9[234408]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:44:42 compute-0 systemd[1]: Reloading.
Nov 24 09:44:42 compute-0 systemd-rc-local-generator[234433]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:44:42 compute-0 systemd-sysv-generator[234436]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:44:42 compute-0 sudo[234406]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:42 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:42.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:43 compute-0 sudo[234470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:44:43 compute-0 sudo[234470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:43 compute-0 sudo[234470]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:43 compute-0 ceph-mon[74331]: pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:44:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:43 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:43 compute-0 sudo[234621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpyiskqvrdysqnksakyuqwghpgowlifl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977483.2444649-1145-259916035583077/AnsiballZ_stat.py'
Nov 24 09:44:43 compute-0 sudo[234621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:44:43 compute-0 python3.9[234623]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:43 compute-0 sudo[234621]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:43 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a00001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:43 compute-0 sudo[234699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcycxwfzedwburskeznpodjsmedmruiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977483.2444649-1145-259916035583077/AnsiballZ_file.py'
Nov 24 09:44:43 compute-0 sudo[234699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:44.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:44 compute-0 python3.9[234701]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:44 compute-0 sudo[234699]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:44 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:44 compute-0 sudo[234852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tupgzubqxkqjbetaoogcqxaajqoebeui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977484.5544257-1181-253610154764329/AnsiballZ_stat.py'
Nov 24 09:44:44 compute-0 sudo[234852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:44.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:45 compute-0 python3.9[234854]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:45 compute-0 sudo[234852]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:45 compute-0 sudo[234931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhyhixuquulnnpgakrlhoixcmiyrkvag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977484.5544257-1181-253610154764329/AnsiballZ_file.py'
Nov 24 09:44:45 compute-0 sudo[234931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:45 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:45 compute-0 ceph-mon[74331]: pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:44:45
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'vms', '.nfs', 'default.rgw.control', 'images', '.mgr']
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:44:45 compute-0 python3.9[234933]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:45 compute-0 sudo[234931]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:44:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:44:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:45 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:46 compute-0 sudo[235083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nffogcepdkijvntofxwjaugwvbdvalwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977485.8227444-1217-93844783331081/AnsiballZ_systemd.py'
Nov 24 09:44:46 compute-0 sudo[235083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:46.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:44:46 compute-0 ceph-mon[74331]: pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:44:46 compute-0 python3.9[235085]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:44:46 compute-0 systemd[1]: Reloading.
Nov 24 09:44:46 compute-0 systemd-rc-local-generator[235111]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:44:46 compute-0 systemd-sysv-generator[235115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:44:46 compute-0 systemd[1]: Starting Create netns directory...
Nov 24 09:44:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:46 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a000089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:46 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 09:44:46 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 09:44:46 compute-0 systemd[1]: Finished Create netns directory.
Nov 24 09:44:46 compute-0 sudo[235083]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:44:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:46.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:44:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:44:47.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:44:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:47 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:47 compute-0 sudo[235295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otllqxkmlcyakcitljvkawescyjlbuan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977487.3122926-1247-188383706338403/AnsiballZ_file.py'
Nov 24 09:44:47 compute-0 sudo[235295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:47 compute-0 podman[235252]: 2025-11-24 09:44:47.605666049 +0000 UTC m=+0.084936075 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 09:44:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:44:47 compute-0 python3.9[235303]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:47 compute-0 sudo[235295]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:47 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000026s ======
Nov 24 09:44:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:48.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 24 09:44:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:48 compute-0 sudo[235457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcqmwnsuddbnjqqaylkfhrvlnfwquveq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977488.1845825-1271-246137195254090/AnsiballZ_stat.py'
Nov 24 09:44:48 compute-0 sudo[235457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:48 compute-0 python3.9[235459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:48 compute-0 sudo[235457]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:48 compute-0 ceph-mon[74331]: pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:44:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:48 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:44:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:48.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:44:49 compute-0 sudo[235580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icrkzijkfuvpnzfghcvtpnoielbvukkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977488.1845825-1271-246137195254090/AnsiballZ_copy.py'
Nov 24 09:44:49 compute-0 sudo[235580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:49 compute-0 python3.9[235582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977488.1845825-1271-246137195254090/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:49 compute-0 sudo[235580]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:49 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a000089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:44:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:49 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:50 compute-0 sudo[235733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkzstaxueqcuzwowzahwwawihbybwosz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977489.8026698-1322-176775953180982/AnsiballZ_file.py'
Nov 24 09:44:50 compute-0 sudo[235733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:50.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:50 compute-0 python3.9[235735]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:44:50 compute-0 sudo[235733]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:50 compute-0 ceph-mon[74331]: pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:44:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:50 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:50 compute-0 sudo[235886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnomidbrkagzjxubcfctsptiadmrfhtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977490.6392832-1346-141049176204102/AnsiballZ_stat.py'
Nov 24 09:44:50 compute-0 sudo[235886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:50.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:44:50] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:44:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:44:50] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:44:51 compute-0 python3.9[235888]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:44:51 compute-0 sudo[235886]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:51 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:51 compute-0 sudo[236010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuchcbwhflckorxrbiyjxladezytxyzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977490.6392832-1346-141049176204102/AnsiballZ_copy.py'
Nov 24 09:44:51 compute-0 sudo[236010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:51 compute-0 python3.9[236012]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977490.6392832-1346-141049176204102/.source.json _original_basename=.eio4e4nz follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:51 compute-0 sudo[236010]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:44:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:51 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a000096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:52.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:52 compute-0 sudo[236163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uokcklkqjkmehtyzdgcwblvedaikxzqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977492.181258-1391-160860164002866/AnsiballZ_file.py'
Nov 24 09:44:52 compute-0 sudo[236163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094452 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:44:52 compute-0 python3.9[236165]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:44:52 compute-0 ceph-mon[74331]: pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:44:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:52 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:52 compute-0 sudo[236163]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:52.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:53 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:53 compute-0 sudo[236316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odazdwununadxmeruokppcfjghcpashm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977493.126872-1415-139485148051418/AnsiballZ_stat.py'
Nov 24 09:44:53 compute-0 sudo[236316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:53 compute-0 sudo[236316]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:53 compute-0 sudo[236439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzctrszbcwzfjigbdzhsstuzqihzwivq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977493.126872-1415-139485148051418/AnsiballZ_copy.py'
Nov 24 09:44:53 compute-0 sudo[236439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:53 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:54 compute-0 sudo[236439]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:54.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:54 compute-0 podman[236471]: 2025-11-24 09:44:54.785017774 +0000 UTC m=+0.059759262 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 24 09:44:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:54 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a000096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:54 compute-0 ceph-mon[74331]: pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:54.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:55 compute-0 sudo[236612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isusxqtpmvfvlsbdpfingsaqygjqwqwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977494.7532601-1466-121124348539236/AnsiballZ_container_config_data.py'
Nov 24 09:44:55 compute-0 sudo[236612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:55 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:55 compute-0 python3.9[236614]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 24 09:44:55 compute-0 sudo[236612]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:55 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:56.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:56 compute-0 sudo[236765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akuixdxktqopvaoytfmtyjgxlbdqaiqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977495.7499316-1493-61922071688604/AnsiballZ_container_config_hash.py'
Nov 24 09:44:56 compute-0 sudo[236765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:56 compute-0 python3.9[236767]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 09:44:56 compute-0 sudo[236765]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:56 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:56 compute-0 ceph-mon[74331]: pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:44:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:56.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:44:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:44:57.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:44:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:44:57.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:44:57 compute-0 sudo[236868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:44:57 compute-0 sudo[236868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:57 compute-0 sudo[236868]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:57 compute-0 sudo[236916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:44:57 compute-0 sudo[236916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:57 compute-0 sudo[236969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzozpomktfdjfnqawzzejhdlorjhjrnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977496.8353412-1520-280248217242673/AnsiballZ_podman_container_info.py'
Nov 24 09:44:57 compute-0 sudo[236969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:57 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a000096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:57 compute-0 python3.9[236971]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 09:44:57 compute-0 sudo[236969]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:44:57 compute-0 sudo[236916]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:44:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:44:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:44:57 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:44:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:57 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:57 compute-0 sudo[237053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:44:57 compute-0 sudo[237053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:57 compute-0 sudo[237053]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:58 compute-0 sudo[237078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:44:58 compute-0 sudo[237078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
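This is the OSD-creation step: cephadm wraps ceph-volume lvm batch over the pre-built LV /dev/ceph_vg0/ceph_lv0, with CEPH_VOLUME_OSDSPEC_AFFINITY pinning the result to the default_drive_group spec and --no-systemd deferring unit management to cephadm. A dry-run sketch of the wrapped call, with --report added so nothing is created; it assumes ceph-volume on PATH, whereas the log runs it inside the ceph container.

    # Dry-run the same batch call; --report prints the plan without acting.
    import subprocess

    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "--report", "--format", "json"],
        check=True,
    )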
Nov 24 09:44:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:44:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:44:58.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
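The anonymous "HEAD /" requests the beast frontend logs every second or two are load-balancer-style health probes, not user traffic. A regex sketch for lifting client, request, status, and latency out of these access-log lines; the sample is copied from the line above.

    # Parse a radosgw beast access-log line.
    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) .*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous '
            '[24/Nov/2025:09:44:58.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')

    m = BEAST.search(line)
    if m:
        print(m.group("client"), m.group("request"),
              m.group("status"), m.group("latency") + "s")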
Nov 24 09:44:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:44:58 compute-0 podman[237144]: 2025-11-24 09:44:58.388615852 +0000 UTC m=+0.041365546 container create 76cb0ea06e84aa88feadba8ad9b20a2b840349174367b0ae909aeaae83e570d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_keldysh, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.398692) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498398752, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1051, "num_deletes": 255, "total_data_size": 1818311, "memory_usage": 1846712, "flush_reason": "Manual Compaction"}
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498412133, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1799304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18051, "largest_seqno": 19101, "table_properties": {"data_size": 1794258, "index_size": 2570, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10170, "raw_average_key_size": 18, "raw_value_size": 1784267, "raw_average_value_size": 3232, "num_data_blocks": 115, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977399, "oldest_key_time": 1763977399, "file_creation_time": 1763977498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 13468 microseconds, and 3932 cpu microseconds.
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.412173) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1799304 bytes OK
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.412189) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.413314) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.413327) EVENT_LOG_v1 {"time_micros": 1763977498413324, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.413343) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1813555, prev total WAL file size 1813555, number of live WAL files 2.
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.414138) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1757KB)], [38(11MB)]
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498414200, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14063381, "oldest_snapshot_seqno": -1}
Nov 24 09:44:58 compute-0 systemd[1]: Started libpod-conmon-76cb0ea06e84aa88feadba8ad9b20a2b840349174367b0ae909aeaae83e570d9.scope.
Nov 24 09:44:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:44:58 compute-0 podman[237144]: 2025-11-24 09:44:58.368257218 +0000 UTC m=+0.021006962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 5009 keys, 13582228 bytes, temperature: kUnknown
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498485767, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13582228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13547216, "index_size": 21398, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 127200, "raw_average_key_size": 25, "raw_value_size": 13455026, "raw_average_value_size": 2686, "num_data_blocks": 878, "num_entries": 5009, "num_filter_entries": 5009, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763977498, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:44:58 compute-0 podman[237144]: 2025-11-24 09:44:58.487450152 +0000 UTC m=+0.140199876 container init 76cb0ea06e84aa88feadba8ad9b20a2b840349174367b0ae909aeaae83e570d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_keldysh, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.486022) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13582228 bytes
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.487637) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.3 rd, 189.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 11.7 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(15.4) write-amplify(7.5) OK, records in: 5533, records dropped: 524 output_compression: NoCompression
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.487657) EVENT_LOG_v1 {"time_micros": 1763977498487648, "job": 18, "event": "compaction_finished", "compaction_time_micros": 71646, "compaction_time_cpu_micros": 26435, "output_level": 6, "num_output_files": 1, "total_output_size": 13582228, "num_input_records": 5533, "num_output_records": 5009, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498488133, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977498490722, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.413976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.490865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.490870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.490872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.490873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:44:58 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:44:58.490874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
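The rocksdb block above is one complete manual-compaction cycle on the mon store: job 17 flushes the memtable to a level-0 table (#40), job 18 compacts L0+L6 into a new L6 file (#41), and the input files are deleted. The EVENT_LOG_v1 records are plain JSON after the marker, so they are easy to mine; a sketch that computes write throughput from the compaction_finished record, with the payload trimmed to the fields used:

    # Pull the JSON payload out of a RocksDB EVENT_LOG_v1 line and report
    # compaction throughput. Sample fields are copied from the log above.
    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    sample = ('rocksdb: (Original Log Time 2025/11/24-09:44:58.487657) '
              'EVENT_LOG_v1 {"time_micros": 1763977498487648, "job": 18, '
              '"event": "compaction_finished", "compaction_time_micros": 71646, '
              '"total_output_size": 13582228}')

    m = EVENT.search(sample)
    if m:
        ev = json.loads(m.group(1))
        if ev["event"] == "compaction_finished":
            mb = ev["total_output_size"] / 1e6
            secs = ev["compaction_time_micros"] / 1e6
            print(f"job {ev['job']}: {mb:.1f} MB in {secs * 1000:.0f} ms "
                  f"({mb / secs:.0f} MB/s)")

The roughly 190 MB/s this prints agrees with the "189.6 wr" figure RocksDB logs itself.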
Nov 24 09:44:58 compute-0 podman[237144]: 2025-11-24 09:44:58.495296101 +0000 UTC m=+0.148045795 container start 76cb0ea06e84aa88feadba8ad9b20a2b840349174367b0ae909aeaae83e570d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 24 09:44:58 compute-0 podman[237144]: 2025-11-24 09:44:58.49884049 +0000 UTC m=+0.151590214 container attach 76cb0ea06e84aa88feadba8ad9b20a2b840349174367b0ae909aeaae83e570d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_keldysh, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:44:58 compute-0 mystifying_keldysh[237160]: 167 167
Nov 24 09:44:58 compute-0 systemd[1]: libpod-76cb0ea06e84aa88feadba8ad9b20a2b840349174367b0ae909aeaae83e570d9.scope: Deactivated successfully.
Nov 24 09:44:58 compute-0 podman[237144]: 2025-11-24 09:44:58.502125752 +0000 UTC m=+0.154875476 container died 76cb0ea06e84aa88feadba8ad9b20a2b840349174367b0ae909aeaae83e570d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_keldysh, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:44:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8e015d862270968740d446e21525d89f710c753038fd680dc90a1d32037529a-merged.mount: Deactivated successfully.
Nov 24 09:44:58 compute-0 podman[237144]: 2025-11-24 09:44:58.539810685 +0000 UTC m=+0.192560389 container remove 76cb0ea06e84aa88feadba8ad9b20a2b840349174367b0ae909aeaae83e570d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_keldysh, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:44:58 compute-0 systemd[1]: libpod-conmon-76cb0ea06e84aa88feadba8ad9b20a2b840349174367b0ae909aeaae83e570d9.scope: Deactivated successfully.
Nov 24 09:44:58 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 24 09:44:58 compute-0 podman[237186]: 2025-11-24 09:44:58.732613941 +0000 UTC m=+0.070872054 container create 832af6064a909624bb3fe4a17fb476805ade03b44dbc8eb7ddb9cda6b2b58a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_shamir, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:44:58 compute-0 systemd[1]: Started libpod-conmon-832af6064a909624bb3fe4a17fb476805ade03b44dbc8eb7ddb9cda6b2b58a67.scope.
Nov 24 09:44:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:58 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:58 compute-0 podman[237186]: 2025-11-24 09:44:58.705206768 +0000 UTC m=+0.043464981 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:44:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6630826d6785745aed78456697688bd8e858a126eba5761d13638e22d7af82ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6630826d6785745aed78456697688bd8e858a126eba5761d13638e22d7af82ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6630826d6785745aed78456697688bd8e858a126eba5761d13638e22d7af82ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6630826d6785745aed78456697688bd8e858a126eba5761d13638e22d7af82ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6630826d6785745aed78456697688bd8e858a126eba5761d13638e22d7af82ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
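These kernel lines flag XFS mounts whose inode timestamps cap at 0x7fffffff, the 32-bit signed time_t limit; XFS filesystems created with the bigtime feature do not carry this cap. Decoding the limit shows the date behind the warning:

    # The kernel's 0x7fffffff timestamp cap is the classic year-2038 limit.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00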
Nov 24 09:44:58 compute-0 podman[237186]: 2025-11-24 09:44:58.820935024 +0000 UTC m=+0.159193177 container init 832af6064a909624bb3fe4a17fb476805ade03b44dbc8eb7ddb9cda6b2b58a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:44:58 compute-0 podman[237186]: 2025-11-24 09:44:58.827815658 +0000 UTC m=+0.166073781 container start 832af6064a909624bb3fe4a17fb476805ade03b44dbc8eb7ddb9cda6b2b58a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_shamir, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 09:44:58 compute-0 ceph-mon[74331]: pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:44:58 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:44:58 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:44:58 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:44:58 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:44:58 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:44:58 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:44:58 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:44:58 compute-0 podman[237186]: 2025-11-24 09:44:58.833593464 +0000 UTC m=+0.171851607 container attach 832af6064a909624bb3fe4a17fb476805ade03b44dbc8eb7ddb9cda6b2b58a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:44:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:44:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:44:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:44:58.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:44:59 compute-0 fervent_shamir[237225]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:44:59 compute-0 fervent_shamir[237225]: --> All data devices are unavailable
Nov 24 09:44:59 compute-0 systemd[1]: libpod-832af6064a909624bb3fe4a17fb476805ade03b44dbc8eb7ddb9cda6b2b58a67.scope: Deactivated successfully.
Nov 24 09:44:59 compute-0 podman[237186]: 2025-11-24 09:44:59.199517137 +0000 UTC m=+0.537775260 container died 832af6064a909624bb3fe4a17fb476805ade03b44dbc8eb7ddb9cda6b2b58a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_shamir, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 24 09:44:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6630826d6785745aed78456697688bd8e858a126eba5761d13638e22d7af82ba-merged.mount: Deactivated successfully.
Nov 24 09:44:59 compute-0 podman[237186]: 2025-11-24 09:44:59.245597362 +0000 UTC m=+0.583855485 container remove 832af6064a909624bb3fe4a17fb476805ade03b44dbc8eb7ddb9cda6b2b58a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:44:59 compute-0 systemd[1]: libpod-conmon-832af6064a909624bb3fe4a17fb476805ade03b44dbc8eb7ddb9cda6b2b58a67.scope: Deactivated successfully.
Nov 24 09:44:59 compute-0 sudo[237078]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:59 compute-0 sudo[237358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krfksdymziorydldlqxcclwoyncmonki ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763977498.7803497-1559-98169136717551/AnsiballZ_edpm_container_manage.py'
Nov 24 09:44:59 compute-0 sudo[237358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:44:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:59 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:59 compute-0 sudo[237361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:44:59 compute-0 sudo[237361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:59 compute-0 sudo[237361]: pam_unix(sudo:session): session closed for user root
Nov 24 09:44:59 compute-0 sudo[237386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:44:59 compute-0 sudo[237386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:44:59 compute-0 python3[237360]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 09:44:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:44:59 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 24 09:44:59 compute-0 podman[237476]: 2025-11-24 09:44:59.75212758 +0000 UTC m=+0.044819405 container create 2b6899a2b90ed4325cb14171c0fb242e56058bbd3dd85559ac2360a5888d6618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ritchie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 24 09:44:59 compute-0 systemd[1]: Started libpod-conmon-2b6899a2b90ed4325cb14171c0fb242e56058bbd3dd85559ac2360a5888d6618.scope.
Nov 24 09:44:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:44:59 compute-0 podman[237476]: 2025-11-24 09:44:59.73512224 +0000 UTC m=+0.027814105 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:44:59 compute-0 podman[237476]: 2025-11-24 09:44:59.841057309 +0000 UTC m=+0.133749164 container init 2b6899a2b90ed4325cb14171c0fb242e56058bbd3dd85559ac2360a5888d6618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ritchie, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:44:59 compute-0 podman[237476]: 2025-11-24 09:44:59.84941941 +0000 UTC m=+0.142111245 container start 2b6899a2b90ed4325cb14171c0fb242e56058bbd3dd85559ac2360a5888d6618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ritchie, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:44:59 compute-0 affectionate_ritchie[237495]: 167 167
Nov 24 09:44:59 compute-0 systemd[1]: libpod-2b6899a2b90ed4325cb14171c0fb242e56058bbd3dd85559ac2360a5888d6618.scope: Deactivated successfully.
Nov 24 09:44:59 compute-0 podman[237476]: 2025-11-24 09:44:59.85495963 +0000 UTC m=+0.147651495 container attach 2b6899a2b90ed4325cb14171c0fb242e56058bbd3dd85559ac2360a5888d6618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:44:59 compute-0 podman[237476]: 2025-11-24 09:44:59.855926474 +0000 UTC m=+0.148618319 container died 2b6899a2b90ed4325cb14171c0fb242e56058bbd3dd85559ac2360a5888d6618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:44:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f0c82ae36267dd33b68a8c60863c0e5a713327ee54741431f7f0d683f918db0-merged.mount: Deactivated successfully.
Nov 24 09:44:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:44:59 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:44:59 compute-0 podman[237476]: 2025-11-24 09:44:59.904349919 +0000 UTC m=+0.197041754 container remove 2b6899a2b90ed4325cb14171c0fb242e56058bbd3dd85559ac2360a5888d6618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 09:44:59 compute-0 systemd[1]: libpod-conmon-2b6899a2b90ed4325cb14171c0fb242e56058bbd3dd85559ac2360a5888d6618.scope: Deactivated successfully.
Nov 24 09:45:00 compute-0 podman[237519]: 2025-11-24 09:45:00.069789482 +0000 UTC m=+0.041505031 container create c54d1ad82424fadf3a093348a4e2e69fa907a94ef02cc21e84d884e5a5bc622a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:45:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:00.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:00 compute-0 systemd[1]: Started libpod-conmon-c54d1ad82424fadf3a093348a4e2e69fa907a94ef02cc21e84d884e5a5bc622a.scope.
Nov 24 09:45:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fcd2be1e95f63486c71baa30b5e7a406572a1192137f657a7ae070d2e81c5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fcd2be1e95f63486c71baa30b5e7a406572a1192137f657a7ae070d2e81c5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fcd2be1e95f63486c71baa30b5e7a406572a1192137f657a7ae070d2e81c5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fcd2be1e95f63486c71baa30b5e7a406572a1192137f657a7ae070d2e81c5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:00 compute-0 podman[237519]: 2025-11-24 09:45:00.052142106 +0000 UTC m=+0.023857675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:45:00 compute-0 podman[237519]: 2025-11-24 09:45:00.156529775 +0000 UTC m=+0.128245344 container init c54d1ad82424fadf3a093348a4e2e69fa907a94ef02cc21e84d884e5a5bc622a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_engelbart, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:45:00 compute-0 podman[237519]: 2025-11-24 09:45:00.166880397 +0000 UTC m=+0.138595946 container start c54d1ad82424fadf3a093348a4e2e69fa907a94ef02cc21e84d884e5a5bc622a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_engelbart, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:45:00 compute-0 podman[237519]: 2025-11-24 09:45:00.170331715 +0000 UTC m=+0.142047294 container attach c54d1ad82424fadf3a093348a4e2e69fa907a94ef02cc21e84d884e5a5bc622a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:45:00 compute-0 silly_engelbart[237536]: {
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:     "0": [
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:         {
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             "devices": [
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "/dev/loop3"
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             ],
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             "lv_name": "ceph_lv0",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             "lv_size": "21470642176",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             "name": "ceph_lv0",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             "tags": {
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.cluster_name": "ceph",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.crush_device_class": "",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.encrypted": "0",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.osd_id": "0",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.type": "block",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.vdo": "0",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:                 "ceph.with_tpm": "0"
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             },
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             "type": "block",
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:             "vg_name": "ceph_vg0"
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:         }
Nov 24 09:45:00 compute-0 silly_engelbart[237536]:     ]
Nov 24 09:45:00 compute-0 silly_engelbart[237536]: }
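The JSON above (ceph-volume lvm list) also explains the earlier "All data devices are unavailable" report: /dev/ceph_vg0/ceph_lv0 already carries osd.0, so the batch call had nothing left to consume. A minimal reader for this structure, trimmed to the fields shown in the log:

    # Map OSD ids to backing LVs from 'ceph-volume lvm list --format json'.
    import json

    lvm_list = json.loads("""
    {
      "0": [
        {
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "devices": ["/dev/loop3"],
          "tags": {"ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
                   "ceph.type": "block"}
        }
      ]
    }
    """)

    for osd_id, lvs in lvm_list.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on "
                  f"{','.join(lv['devices'])} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']})")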
Nov 24 09:45:00 compute-0 systemd[1]: libpod-c54d1ad82424fadf3a093348a4e2e69fa907a94ef02cc21e84d884e5a5bc622a.scope: Deactivated successfully.
Nov 24 09:45:00 compute-0 podman[237519]: 2025-11-24 09:45:00.468025762 +0000 UTC m=+0.439741311 container died c54d1ad82424fadf3a093348a4e2e69fa907a94ef02cc21e84d884e5a5bc622a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_engelbart, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 09:45:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2fcd2be1e95f63486c71baa30b5e7a406572a1192137f657a7ae070d2e81c5a-merged.mount: Deactivated successfully.
Nov 24 09:45:00 compute-0 podman[237519]: 2025-11-24 09:45:00.705892096 +0000 UTC m=+0.677607635 container remove c54d1ad82424fadf3a093348a4e2e69fa907a94ef02cc21e84d884e5a5bc622a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_engelbart, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:45:00 compute-0 podman[237438]: 2025-11-24 09:45:00.708455531 +0000 UTC m=+1.097524553 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 09:45:00 compute-0 systemd[1]: libpod-conmon-c54d1ad82424fadf3a093348a4e2e69fa907a94ef02cc21e84d884e5a5bc622a.scope: Deactivated successfully.
Nov 24 09:45:00 compute-0 sudo[237386]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:00 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:00 compute-0 sudo[237590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:45:00 compute-0 sudo[237590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:45:00 compute-0 sudo[237590]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:00 compute-0 podman[237625]: 2025-11-24 09:45:00.843585268 +0000 UTC m=+0.044965397 container create 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible)
Nov 24 09:45:00 compute-0 podman[237625]: 2025-11-24 09:45:00.818922624 +0000 UTC m=+0.020302773 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 09:45:00 compute-0 python3[237360]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
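edpm_container_manage logs both the Kolla-style config_data it consumed and the exact podman create argv it derived, as in the PODMAN-CONTAINER-DEBUG line above. An illustrative mapping of one to the other, with the volume list trimmed; this mirrors the logged command, not the module's actual implementation.

    # Derive a podman create argv from a (trimmed) config_data dict.
    config = {
        "image": "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified",
        "net": "host",
        "privileged": True,
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": ["/etc/hosts:/etc/hosts:ro", "/dev:/dev"],  # trimmed
    }

    argv = ["podman", "create", "--name", "multipathd",
            "--network", config["net"], f"--privileged={config['privileged']}"]
    for key, val in config["environment"].items():
        argv += ["--env", f"{key}={val}"]
    for vol in config["volumes"]:
        argv += ["--volume", vol]
    argv.append(config["image"])
    print(" ".join(argv))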
Nov 24 09:45:00 compute-0 sudo[237638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:45:00 compute-0 sudo[237638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:45:00 compute-0 ceph-mon[74331]: pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:45:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:00.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:00 compute-0 sudo[237358]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:45:00] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Nov 24 09:45:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:45:00] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
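The two lines above are a single Prometheus scrape of the mgr's prometheus module, reported once via the container's stdout and once by the mgr's cherrypy logger. A one-off scrape of the same exporter; port 9283 is the module's usual default and is an assumption here, since the log only shows the request path.

    # Fetch /metrics from the mgr prometheus module (port 9283 assumed).
    import urllib.request

    with urllib.request.urlopen("http://compute-0:9283/metrics", timeout=5) as r:
        helps = [l for l in r.read().decode().splitlines()
                 if l.startswith("# HELP")]
    print(f"{len(helps)} metric families exported")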
Nov 24 09:45:01 compute-0 podman[237752]: 2025-11-24 09:45:01.221929935 +0000 UTC m=+0.035971721 container create cc9f8eaa3f7b2e3bfea8e76ed67f17b94510111b4edc31e9d0a0864e1c2cc3fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:45:01 compute-0 podman[237752]: 2025-11-24 09:45:01.206501555 +0000 UTC m=+0.020543371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:45:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:01 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:01 compute-0 systemd[1]: Started libpod-conmon-cc9f8eaa3f7b2e3bfea8e76ed67f17b94510111b4edc31e9d0a0864e1c2cc3fb.scope.
Nov 24 09:45:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:45:01 compute-0 podman[237752]: 2025-11-24 09:45:01.36927715 +0000 UTC m=+0.183318966 container init cc9f8eaa3f7b2e3bfea8e76ed67f17b94510111b4edc31e9d0a0864e1c2cc3fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:45:01 compute-0 podman[237752]: 2025-11-24 09:45:01.375813606 +0000 UTC m=+0.189855382 container start cc9f8eaa3f7b2e3bfea8e76ed67f17b94510111b4edc31e9d0a0864e1c2cc3fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_knuth, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:45:01 compute-0 podman[237752]: 2025-11-24 09:45:01.379564881 +0000 UTC m=+0.193606667 container attach cc9f8eaa3f7b2e3bfea8e76ed67f17b94510111b4edc31e9d0a0864e1c2cc3fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:45:01 compute-0 determined_knuth[237793]: 167 167
Nov 24 09:45:01 compute-0 systemd[1]: libpod-cc9f8eaa3f7b2e3bfea8e76ed67f17b94510111b4edc31e9d0a0864e1c2cc3fb.scope: Deactivated successfully.
Nov 24 09:45:01 compute-0 podman[237752]: 2025-11-24 09:45:01.381592982 +0000 UTC m=+0.195634788 container died cc9f8eaa3f7b2e3bfea8e76ed67f17b94510111b4edc31e9d0a0864e1c2cc3fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:45:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a795f9db1d0fb1c026ec0bda06c68b3735fe4d614e8a05afcf1938cff7de5bf-merged.mount: Deactivated successfully.
Nov 24 09:45:01 compute-0 podman[237752]: 2025-11-24 09:45:01.425740089 +0000 UTC m=+0.239781875 container remove cc9f8eaa3f7b2e3bfea8e76ed67f17b94510111b4edc31e9d0a0864e1c2cc3fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:45:01 compute-0 systemd[1]: libpod-conmon-cc9f8eaa3f7b2e3bfea8e76ed67f17b94510111b4edc31e9d0a0864e1c2cc3fb.scope: Deactivated successfully.
Nov 24 09:45:01 compute-0 sudo[237918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkuwikbkkfufhepxnuwzzvyrobzvulkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977501.304978-1583-88997669271195/AnsiballZ_stat.py'
Nov 24 09:45:01 compute-0 sudo[237918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:01 compute-0 podman[237913]: 2025-11-24 09:45:01.585798566 +0000 UTC m=+0.049850212 container create f960223572ed86db42cc79d965b3e29b34d21fc6a5d71f2b954c06c94920e9bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_cannon, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:45:01 compute-0 systemd[1]: Started libpod-conmon-f960223572ed86db42cc79d965b3e29b34d21fc6a5d71f2b954c06c94920e9bc.scope.
Nov 24 09:45:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bc6b023d274e27080f647981830af7537ea2cca1d138b434cda8e6af56c6ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bc6b023d274e27080f647981830af7537ea2cca1d138b434cda8e6af56c6ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bc6b023d274e27080f647981830af7537ea2cca1d138b434cda8e6af56c6ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bc6b023d274e27080f647981830af7537ea2cca1d138b434cda8e6af56c6ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:01 compute-0 podman[237913]: 2025-11-24 09:45:01.55907777 +0000 UTC m=+0.023129446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:45:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:01 compute-0 podman[237913]: 2025-11-24 09:45:01.67893351 +0000 UTC m=+0.142985196 container init f960223572ed86db42cc79d965b3e29b34d21fc6a5d71f2b954c06c94920e9bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:45:01 compute-0 podman[237913]: 2025-11-24 09:45:01.685813254 +0000 UTC m=+0.149864910 container start f960223572ed86db42cc79d965b3e29b34d21fc6a5d71f2b954c06c94920e9bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_cannon, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:45:01 compute-0 podman[237913]: 2025-11-24 09:45:01.695574321 +0000 UTC m=+0.159625977 container attach f960223572ed86db42cc79d965b3e29b34d21fc6a5d71f2b954c06c94920e9bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_cannon, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:45:01 compute-0 python3.9[237929]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:45:01 compute-0 sudo[237918]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:01 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:02.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:02 : epoch 692428fa : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:45:02 compute-0 lvm[238109]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:45:02 compute-0 lvm[238109]: VG ceph_vg0 finished
Nov 24 09:45:02 compute-0 affectionate_cannon[237935]: {}
Nov 24 09:45:02 compute-0 systemd[1]: libpod-f960223572ed86db42cc79d965b3e29b34d21fc6a5d71f2b954c06c94920e9bc.scope: Deactivated successfully.
Nov 24 09:45:02 compute-0 systemd[1]: libpod-f960223572ed86db42cc79d965b3e29b34d21fc6a5d71f2b954c06c94920e9bc.scope: Consumed 1.114s CPU time.
Nov 24 09:45:02 compute-0 podman[237913]: 2025-11-24 09:45:02.38604847 +0000 UTC m=+0.850100146 container died f960223572ed86db42cc79d965b3e29b34d21fc6a5d71f2b954c06c94920e9bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 09:45:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8bc6b023d274e27080f647981830af7537ea2cca1d138b434cda8e6af56c6ae-merged.mount: Deactivated successfully.
Nov 24 09:45:02 compute-0 podman[237913]: 2025-11-24 09:45:02.429519829 +0000 UTC m=+0.893571485 container remove f960223572ed86db42cc79d965b3e29b34d21fc6a5d71f2b954c06c94920e9bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:45:02 compute-0 systemd[1]: libpod-conmon-f960223572ed86db42cc79d965b3e29b34d21fc6a5d71f2b954c06c94920e9bc.scope: Deactivated successfully.
Nov 24 09:45:02 compute-0 sudo[238177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czwxgdctvmkgkfubxpoxeqatdnfmoatl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977502.2023072-1610-33444469347227/AnsiballZ_file.py'
Nov 24 09:45:02 compute-0 sudo[238177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:02 compute-0 sudo[237638]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:45:02 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:45:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:45:02 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:45:02 compute-0 sudo[238180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:45:02 compute-0 sudo[238180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:45:02 compute-0 sudo[238180]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:02 compute-0 python3.9[238179]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:02 compute-0 sudo[238177]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:02 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:02 compute-0 sudo[238278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pottuyncateeetdetnxesbsglknvaiwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977502.2023072-1610-33444469347227/AnsiballZ_stat.py'
Nov 24 09:45:02 compute-0 sudo[238278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:02 compute-0 ceph-mon[74331]: pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:02 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:45:02 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:45:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:02.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:03 compute-0 python3.9[238280]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:45:03 compute-0 sudo[238278]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:03 compute-0 sudo[238312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:45:03 compute-0 sudo[238312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:45:03 compute-0 sudo[238312]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:03 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:03 compute-0 sudo[238455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwabzqqytpqvqjmdvpfujlbxtidjlnns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977503.1358979-1610-21836946566317/AnsiballZ_copy.py'
Nov 24 09:45:03 compute-0 sudo[238455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:03 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:03 compute-0 python3.9[238457]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763977503.1358979-1610-21836946566317/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:04 compute-0 sudo[238455]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:04.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:04 compute-0 sudo[238531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tebqwvwxdclojpkoztlgwlnumqbujfsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977503.1358979-1610-21836946566317/AnsiballZ_systemd.py'
Nov 24 09:45:04 compute-0 sudo[238531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:04 compute-0 python3.9[238533]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:45:04 compute-0 systemd[1]: Reloading.
Nov 24 09:45:04 compute-0 systemd-sysv-generator[238563]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:04 compute-0 systemd-rc-local-generator[238559]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:04 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:04 compute-0 sudo[238531]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:04 compute-0 ceph-mon[74331]: pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:04.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:05 compute-0 sudo[238643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhddoovhfydhiqtvijhnsjlhdeopovuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977503.1358979-1610-21836946566317/AnsiballZ_systemd.py'
Nov 24 09:45:05 compute-0 sudo[238643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:05 : epoch 692428fa : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:45:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:05 : epoch 692428fa : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:45:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:05 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:05 compute-0 python3.9[238645]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:05 compute-0 systemd[1]: Reloading.
Nov 24 09:45:05 compute-0 systemd-rc-local-generator[238677]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:05 compute-0 systemd-sysv-generator[238680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:05 compute-0 systemd[1]: Starting multipathd container...
Nov 24 09:45:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:05 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05a16f6913f2e4a39e950a51817a44074134a6da06562942cd6f8fa953d14a2f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05a16f6913f2e4a39e950a51817a44074134a6da06562942cd6f8fa953d14a2f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:05 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4.
Nov 24 09:45:05 compute-0 podman[238687]: 2025-11-24 09:45:05.963737665 +0000 UTC m=+0.112336252 container init 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:45:05 compute-0 multipathd[238700]: + sudo -E kolla_set_configs
Nov 24 09:45:05 compute-0 sudo[238706]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 09:45:05 compute-0 podman[238687]: 2025-11-24 09:45:05.987484715 +0000 UTC m=+0.136083272 container start 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:45:05 compute-0 sudo[238706]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 09:45:05 compute-0 sudo[238706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 09:45:05 compute-0 podman[238687]: multipathd
Nov 24 09:45:05 compute-0 systemd[1]: Started multipathd container.
Nov 24 09:45:06 compute-0 multipathd[238700]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 09:45:06 compute-0 multipathd[238700]: INFO:__main__:Validating config file
Nov 24 09:45:06 compute-0 multipathd[238700]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 09:45:06 compute-0 multipathd[238700]: INFO:__main__:Writing out command to execute
Nov 24 09:45:06 compute-0 sudo[238643]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:06 compute-0 sudo[238706]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:06 compute-0 multipathd[238700]: ++ cat /run_command
Nov 24 09:45:06 compute-0 multipathd[238700]: + CMD='/usr/sbin/multipathd -d'
Nov 24 09:45:06 compute-0 multipathd[238700]: + ARGS=
Nov 24 09:45:06 compute-0 multipathd[238700]: + sudo kolla_copy_cacerts
Nov 24 09:45:06 compute-0 sudo[238727]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 09:45:06 compute-0 sudo[238727]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 09:45:06 compute-0 sudo[238727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 09:45:06 compute-0 sudo[238727]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:06 compute-0 multipathd[238700]: + [[ ! -n '' ]]
Nov 24 09:45:06 compute-0 multipathd[238700]: + . kolla_extend_start
Nov 24 09:45:06 compute-0 multipathd[238700]: Running command: '/usr/sbin/multipathd -d'
Nov 24 09:45:06 compute-0 multipathd[238700]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 09:45:06 compute-0 multipathd[238700]: + umask 0022
Nov 24 09:45:06 compute-0 multipathd[238700]: + exec /usr/sbin/multipathd -d
Nov 24 09:45:06 compute-0 podman[238707]: 2025-11-24 09:45:06.055265449 +0000 UTC m=+0.057484695 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd)
Nov 24 09:45:06 compute-0 systemd[1]: 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4-4938d1a185ab17c3.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 09:45:06 compute-0 systemd[1]: 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4-4938d1a185ab17c3.service: Failed with result 'exit-code'.
Nov 24 09:45:06 compute-0 multipathd[238700]: 3453.698174 | --------start up--------
Nov 24 09:45:06 compute-0 multipathd[238700]: 3453.698189 | read /etc/multipath.conf
Nov 24 09:45:06 compute-0 multipathd[238700]: 3453.703350 | path checkers start up
Nov 24 09:45:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:06.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:06 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:06 compute-0 ceph-mon[74331]: pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:06.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:45:07.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:45:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:07 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:07 compute-0 python3.9[238890]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:45:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:45:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 4221 writes, 19K keys, 4221 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 4221 writes, 4221 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1452 writes, 5919 keys, 1452 commit groups, 1.0 writes per commit group, ingest: 10.86 MB, 0.02 MB/s
                                           Interval WAL: 1452 writes, 1452 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     93.8      0.33              0.09         9    0.037       0      0       0.0       0.0
                                             L6      1/0   12.95 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    147.4    124.9      0.81              0.24         8    0.102     38K   4334       0.0       0.0
                                            Sum      1/0   12.95 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    104.5    115.8      1.15              0.33        17    0.067     38K   4334       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    109.7    111.1      0.45              0.10         6    0.075     17K   2047       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    147.4    124.9      0.81              0.24         8    0.102     38K   4334       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     94.7      0.33              0.09         8    0.041       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.031, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.11 MB/s write, 0.12 GB read, 0.10 MB/s read, 1.1 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b87797d350#2 capacity: 304.00 MB usage: 5.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(341,5.36 MB,1.76472%) FilterBlock(18,116.98 KB,0.0375798%) IndexBlock(18,226.14 KB,0.0726449%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 09:45:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:07 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:07 compute-0 sudo[239043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbqdvvdkzlubykwinipmsuiqghzhuuck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977507.7506182-1718-276247627415751/AnsiballZ_command.py'
Nov 24 09:45:07 compute-0 sudo[239043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:08.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:08 compute-0 python3.9[239045]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:08 : epoch 692428fa : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:45:08 compute-0 sudo[239043]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:08 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:08 compute-0 sudo[239209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hotzweyuikekxyitgfzvhzgrxjwdtcwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977508.6299965-1742-59159732447321/AnsiballZ_systemd.py'
Nov 24 09:45:08 compute-0 sudo[239209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:08.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:08 compute-0 ceph-mon[74331]: pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:09 compute-0 python3.9[239211]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:45:09 compute-0 systemd[1]: Stopping multipathd container...
Nov 24 09:45:09 compute-0 multipathd[238700]: 3456.890123 | exit (signal)
Nov 24 09:45:09 compute-0 multipathd[238700]: 3456.890220 | --------shut down-------
Nov 24 09:45:09 compute-0 systemd[1]: libpod-05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4.scope: Deactivated successfully.
Nov 24 09:45:09 compute-0 podman[239215]: 2025-11-24 09:45:09.289967621 +0000 UTC m=+0.064448431 container died 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:45:09 compute-0 systemd[1]: 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4-4938d1a185ab17c3.timer: Deactivated successfully.
Nov 24 09:45:09 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4.
Nov 24 09:45:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:09 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-05a16f6913f2e4a39e950a51817a44074134a6da06562942cd6f8fa953d14a2f-merged.mount: Deactivated successfully.
Nov 24 09:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4-userdata-shm.mount: Deactivated successfully.
Nov 24 09:45:09 compute-0 podman[239215]: 2025-11-24 09:45:09.333390919 +0000 UTC m=+0.107871729 container cleanup 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:45:09 compute-0 podman[239215]: multipathd
Nov 24 09:45:09 compute-0 podman[239245]: multipathd
Nov 24 09:45:09 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 24 09:45:09 compute-0 systemd[1]: Stopped multipathd container.
Nov 24 09:45:09 compute-0 systemd[1]: Starting multipathd container...
Nov 24 09:45:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05a16f6913f2e4a39e950a51817a44074134a6da06562942cd6f8fa953d14a2f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05a16f6913f2e4a39e950a51817a44074134a6da06562942cd6f8fa953d14a2f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 09:45:09 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4.
Nov 24 09:45:09 compute-0 podman[239258]: 2025-11-24 09:45:09.540817073 +0000 UTC m=+0.115340007 container init 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 09:45:09 compute-0 multipathd[239273]: + sudo -E kolla_set_configs
Nov 24 09:45:09 compute-0 sudo[239279]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 24 09:45:09 compute-0 sudo[239279]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 09:45:09 compute-0 sudo[239279]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 09:45:09 compute-0 podman[239258]: 2025-11-24 09:45:09.566897163 +0000 UTC m=+0.141420067 container start 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 24 09:45:09 compute-0 podman[239258]: multipathd
Nov 24 09:45:09 compute-0 systemd[1]: Started multipathd container.
Nov 24 09:45:09 compute-0 multipathd[239273]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 09:45:09 compute-0 multipathd[239273]: INFO:__main__:Validating config file
Nov 24 09:45:09 compute-0 multipathd[239273]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 09:45:09 compute-0 multipathd[239273]: INFO:__main__:Writing out command to execute
Nov 24 09:45:09 compute-0 sudo[239209]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:09 compute-0 sudo[239279]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:09 compute-0 multipathd[239273]: ++ cat /run_command
Nov 24 09:45:09 compute-0 multipathd[239273]: + CMD='/usr/sbin/multipathd -d'
Nov 24 09:45:09 compute-0 multipathd[239273]: + ARGS=
Nov 24 09:45:09 compute-0 multipathd[239273]: + sudo kolla_copy_cacerts
Nov 24 09:45:09 compute-0 sudo[239304]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 24 09:45:09 compute-0 sudo[239304]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 24 09:45:09 compute-0 sudo[239304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 24 09:45:09 compute-0 sudo[239304]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:09 compute-0 multipathd[239273]: + [[ ! -n '' ]]
Nov 24 09:45:09 compute-0 multipathd[239273]: + . kolla_extend_start
Nov 24 09:45:09 compute-0 multipathd[239273]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 09:45:09 compute-0 multipathd[239273]: Running command: '/usr/sbin/multipathd -d'
Nov 24 09:45:09 compute-0 multipathd[239273]: + umask 0022
Nov 24 09:45:09 compute-0 multipathd[239273]: + exec /usr/sbin/multipathd -d
Nov 24 09:45:09 compute-0 podman[239280]: 2025-11-24 09:45:09.650271912 +0000 UTC m=+0.072576887 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 09:45:09 compute-0 multipathd[239273]: 3457.283869 | --------start up--------
Nov 24 09:45:09 compute-0 multipathd[239273]: 3457.283885 | read /etc/multipath.conf
Nov 24 09:45:09 compute-0 multipathd[239273]: 3457.289044 | path checkers start up
Nov 24 09:45:09 compute-0 systemd[1]: 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4-d46901e33a9e50f.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 09:45:09 compute-0 systemd[1]: 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4-d46901e33a9e50f.service: Failed with result 'exit-code'.
Nov 24 09:45:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:09 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:10.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:10 compute-0 sudo[239465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbggyzwpcnqdiufxmgzpikyormysqpmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977510.2109094-1766-16708160807961/AnsiballZ_file.py'
Nov 24 09:45:10 compute-0 sudo[239465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:10 compute-0 python3.9[239467]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:10 compute-0 sudo[239465]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:10 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:45:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:10.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:45:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:45:10] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Nov 24 09:45:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:45:10] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Nov 24 09:45:10 compute-0 ceph-mon[74331]: pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:11 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 09:45:11 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 24 09:45:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:11 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f8001110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:11 compute-0 sudo[239620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqymecxghwnkjudxkkcxwwoiptcapnvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977511.4144223-1802-102178759977830/AnsiballZ_file.py'
Nov 24 09:45:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:45:11 compute-0 sudo[239620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:11 compute-0 python3.9[239622]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 09:45:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:11 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:11 compute-0 sudo[239620]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:12.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:12 compute-0 sudo[239773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjahwowytkezltrydzdhawljimmbhsnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977512.2755065-1826-8561783775490/AnsiballZ_modprobe.py'
Nov 24 09:45:12 compute-0 sudo[239773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:12 compute-0 python3.9[239775]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 24 09:45:12 compute-0 kernel: Key type psk registered
Nov 24 09:45:12 compute-0 sudo[239773]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:12 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:12.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:13 compute-0 ceph-mon[74331]: pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:45:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:13 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:13 compute-0 sudo[239937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdagayifhvtafklrqjmqczxgsahxayce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977513.2612598-1850-87997546751805/AnsiballZ_stat.py'
Nov 24 09:45:13 compute-0 sudo[239937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:13 compute-0 python3.9[239939]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:45:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:13 compute-0 sudo[239937]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:13 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f8001110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:13 compute-0 sudo[240060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maipslnwzcsroxawhdqrwtznftlkbpjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977513.2612598-1850-87997546751805/AnsiballZ_copy.py'
Nov 24 09:45:13 compute-0 sudo[240060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:45:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:14.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:45:14 compute-0 python3.9[240062]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763977513.2612598-1850-87997546751805/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:14 compute-0 sudo[240060]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094514 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:45:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:14 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:14.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:15 compute-0 ceph-mon[74331]: pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:15 compute-0 sudo[240213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeaaptnnhhckukylorzeejxkbfqjikzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977514.8343017-1898-177272982938923/AnsiballZ_lineinfile.py'
Nov 24 09:45:15 compute-0 sudo[240213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:15 compute-0 python3.9[240215]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:15 compute-0 sudo[240213]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:15 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:45:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:45:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:45:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:45:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:45:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:45:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:15 compute-0 sudo[240366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsbwqhlopnaxgmdwkjbnqzswezdmbywj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977515.6144507-1922-151789923729049/AnsiballZ_systemd.py'
Nov 24 09:45:15 compute-0 sudo[240366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:15 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:45:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:45:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:16.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:45:16 compute-0 python3.9[240368]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:45:16 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 09:45:16 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 24 09:45:16 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 24 09:45:16 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 24 09:45:16 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 24 09:45:16 compute-0 sudo[240366]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:16 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f80022a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:45:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:16.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:45:16 compute-0 sudo[240523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvylpouzhdvgjoloowlbbxjetbrkvvrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977516.7377803-1946-30722085600350/AnsiballZ_dnf.py'
Nov 24 09:45:16 compute-0 sudo[240523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:45:17.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:45:17 compute-0 ceph-mon[74331]: pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:17 compute-0 python3.9[240525]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 09:45:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:17 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:17 compute-0 podman[240528]: 2025-11-24 09:45:17.844048676 +0000 UTC m=+0.122572440 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 09:45:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:17 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:18.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:18 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:18.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:19 compute-0 ceph-mon[74331]: pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:19 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f80022a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:19 compute-0 systemd[1]: Reloading.
Nov 24 09:45:19 compute-0 systemd-rc-local-generator[240587]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:19 compute-0 systemd-sysv-generator[240592]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:19 compute-0 systemd[1]: Reloading.
Nov 24 09:45:19 compute-0 systemd-rc-local-generator[240622]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:19 compute-0 systemd-sysv-generator[240626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:19 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:20.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:20 compute-0 systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 09:45:20 compute-0 systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 09:45:20 compute-0 lvm[240668]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:45:20 compute-0 lvm[240668]: VG ceph_vg0 finished
Nov 24 09:45:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 09:45:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 24 09:45:20 compute-0 systemd[1]: Reloading.
Nov 24 09:45:20 compute-0 systemd-sysv-generator[240724]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:20 compute-0 systemd-rc-local-generator[240720]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:45:20.555 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:45:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:45:20.556 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:45:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:45:20.556 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:45:20 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 09:45:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:20 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:20.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:45:20] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:45:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:45:20] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:45:21 compute-0 ceph-mon[74331]: pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:21 compute-0 sudo[240523]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:21 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:21 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 09:45:21 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 24 09:45:21 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.326s CPU time.
Nov 24 09:45:21 compute-0 systemd[1]: run-r971b8db4f773494ca1fdc008f3514f25.service: Deactivated successfully.
Nov 24 09:45:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:21 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f80022a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:22.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:22 compute-0 sudo[242012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-praqzuoawwfjwctsfjkgwkfqyihksgjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977522.3320725-1970-148057929174559/AnsiballZ_systemd_service.py'
Nov 24 09:45:22 compute-0 sudo[242012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:22 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f80022a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:22 compute-0 python3.9[242014]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:45:22 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 24 09:45:22 compute-0 iscsid[229322]: iscsid shutting down.
Nov 24 09:45:22 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 24 09:45:22 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 24 09:45:22 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 09:45:22 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 24 09:45:22 compute-0 systemd[1]: Started Open-iSCSI.
Nov 24 09:45:22 compute-0 sudo[242012]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:23 compute-0 ceph-mon[74331]: pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:23 compute-0 sudo[242044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:45:23 compute-0 sudo[242044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:45:23 compute-0 sudo[242044]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:23 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f80022a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094523 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:45:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:23 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:23 compute-0 python3.9[242194]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 09:45:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:24.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:24 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f80022a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:24 compute-0 sudo[242362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbjlxdvsmqhccagtgbdefrmjrnbagvrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977524.6815507-2022-181054077954310/AnsiballZ_file.py'
Nov 24 09:45:24 compute-0 sudo[242362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:24 compute-0 podman[242323]: 2025-11-24 09:45:24.965262391 +0000 UTC m=+0.057687930 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 09:45:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:24.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:25 compute-0 ceph-mon[74331]: pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:25 compute-0 python3.9[242368]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:25 compute-0 sudo[242362]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:25 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:25 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:26.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:26 compute-0 sudo[242521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jalunnxauhnlwpgckfxuajuclvfqfzzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977525.8487039-2055-15281160643493/AnsiballZ_systemd_service.py'
Nov 24 09:45:26 compute-0 sudo[242521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:26 compute-0 python3.9[242523]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:45:26 compute-0 systemd[1]: Reloading.
Nov 24 09:45:26 compute-0 systemd-rc-local-generator[242549]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:26 compute-0 systemd-sysv-generator[242553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:26 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:26 compute-0 sudo[242521]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:26.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:45:27.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:45:27 compute-0 ceph-mon[74331]: pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:27 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f8003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:27 compute-0 python3.9[242709]: ansible-ansible.builtin.service_facts Invoked
Nov 24 09:45:27 compute-0 network[242726]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 09:45:27 compute-0 network[242727]: 'network-scripts' will be removed from distribution in near future.
Nov 24 09:45:27 compute-0 network[242728]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 09:45:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:27 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:28.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:28 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:45:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:28.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:45:29 compute-0 ceph-mon[74331]: pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:29 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:29 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f8003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:30.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:30 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:45:30] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Nov 24 09:45:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:45:30] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Nov 24 09:45:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:31.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:31 compute-0 ceph-mon[74331]: pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:45:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:45:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:31 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:31 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:32.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:32 : epoch 692428fa : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:45:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:32 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:33.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:33 compute-0 ceph-mon[74331]: pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:33 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:33 compute-0 sudo[243007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thazxgixzoiscaqrjcsnfqsarqqzwyly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977533.2389846-2112-253350336276011/AnsiballZ_systemd_service.py'
Nov 24 09:45:33 compute-0 sudo[243007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:33 compute-0 python3.9[243009]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:33 compute-0 sudo[243007]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:33 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f8003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:34.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:34 compute-0 sudo[243160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzrllyeyqnftijefsxxxahadusrrxtwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977533.946333-2112-248822948969975/AnsiballZ_systemd_service.py'
Nov 24 09:45:34 compute-0 sudo[243160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:34 compute-0 python3.9[243162]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:34 compute-0 sudo[243160]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:34 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:35.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:35 compute-0 sudo[243314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlxiccjbmkchpwqylggwiflcvehwofbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977534.7629464-2112-198554146506507/AnsiballZ_systemd_service.py'
Nov 24 09:45:35 compute-0 sudo[243314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:35 compute-0 ceph-mon[74331]: pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:35 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:35 compute-0 python3.9[243316]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:35 compute-0 sudo[243314]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:35 : epoch 692428fa : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:45:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:35 : epoch 692428fa : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:45:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:35 compute-0 sudo[243468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvqnrbdaqsqfnthqiwpqgzydpymeiztw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977535.5345416-2112-171423348779444/AnsiballZ_systemd_service.py'
Nov 24 09:45:35 compute-0 sudo[243468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:35 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:36 compute-0 python3.9[243470]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:36 compute-0 sudo[243468]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:36.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:36 compute-0 sudo[243622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aevqgtoyxsvhtdqxdigbrthbhdvytcgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977536.1731029-2112-195296770364223/AnsiballZ_systemd_service.py'
Nov 24 09:45:36 compute-0 sudo[243622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:36 compute-0 python3.9[243624]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:36 compute-0 sudo[243622]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:36 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f8003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:45:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:37.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:45:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:45:37.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:45:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:45:37.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:45:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:45:37.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:45:37 compute-0 sudo[243775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-latjrcalrplokclivdmijvgdkauirskn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977536.8620112-2112-28965004274050/AnsiballZ_systemd_service.py'
Nov 24 09:45:37 compute-0 sudo[243775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:37 compute-0 ceph-mon[74331]: pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:37 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:37 compute-0 python3.9[243777]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:37 compute-0 sudo[243775]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:37 compute-0 sudo[243929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mljxcqmwnpilcfoiooecyxyumimfilsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977537.587712-2112-45655605356030/AnsiballZ_systemd_service.py'
Nov 24 09:45:37 compute-0 sudo[243929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:37 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:38 compute-0 python3.9[243931]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:38.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:38 compute-0 sudo[243929]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:38 compute-0 sudo[244083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usexxqiwysorxvdxqmimkaqrsngunmoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977538.2933788-2112-222245473457823/AnsiballZ_systemd_service.py'
Nov 24 09:45:38 compute-0 sudo[244083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:45:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:38 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:38 compute-0 python3.9[244085]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:45:38 compute-0 sudo[244083]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:39.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:39 compute-0 ceph-mon[74331]: pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:39 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f8003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:39 compute-0 podman[244112]: 2025-11-24 09:45:39.780623968 +0000 UTC m=+0.058117510 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:45:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:39 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:40.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:40 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:45:40] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Nov 24 09:45:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:45:40] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Nov 24 09:45:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:41.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:41 compute-0 sudo[244259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suihmzaewwjwtsqsdujhbkobjblgvlbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977540.8514676-2289-183607081397278/AnsiballZ_file.py'
Nov 24 09:45:41 compute-0 sudo[244259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:41 compute-0 ceph-mon[74331]: pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:45:41 compute-0 python3.9[244261]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:41 compute-0 sudo[244259]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:41 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:45:41 compute-0 sudo[244413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dezjcjugzbxvbtoluzozbhiqjxdfdeyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977541.4775333-2289-149841372710546/AnsiballZ_file.py'
Nov 24 09:45:41 compute-0 sudo[244413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:41 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49ec001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:41 compute-0 python3.9[244415]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:41 compute-0 sudo[244413]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:42.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:42 compute-0 sudo[244566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbpfftodipennpkfwvibfdfpbuqmsdhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977542.0882027-2289-15572030204671/AnsiballZ_file.py'
Nov 24 09:45:42 compute-0 sudo[244566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:42 compute-0 python3.9[244568]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:42 compute-0 sudo[244566]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:42 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:42 compute-0 sudo[244718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gypkijjcxenlxqcbauechmserdalmbiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977542.6813242-2289-165095854773464/AnsiballZ_file.py'
Nov 24 09:45:42 compute-0 sudo[244718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:45:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:43.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:45:43 compute-0 python3.9[244720]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:43 compute-0 sudo[244718]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:43 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:43 compute-0 sudo[244815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:45:43 compute-0 sudo[244815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:45:43 compute-0 sudo[244815]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:43 compute-0 ceph-mon[74331]: pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:45:43 compute-0 sudo[244896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyfgowkihsxrutkgdrwronpylilscclc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977543.2548826-2289-143616764258354/AnsiballZ_file.py'
Nov 24 09:45:43 compute-0 sudo[244896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:43 compute-0 python3.9[244898]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:45:43 compute-0 sudo[244896]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094543 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:45:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:43 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:44 compute-0 sudo[245048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onhxdmqiyvmyufdmaqwdfhpoakoqzxtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977543.8074145-2289-266413271361527/AnsiballZ_file.py'
Nov 24 09:45:44 compute-0 sudo[245048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:45:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:44.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:45:44 compute-0 python3.9[245050]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:44 compute-0 sudo[245048]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:44 compute-0 ceph-mon[74331]: pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:45:44 compute-0 sudo[245201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqezrfxmwytrxvpoedttpdmittttjhfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977544.3607295-2289-21916470280129/AnsiballZ_file.py'
Nov 24 09:45:44 compute-0 sudo[245201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:44 compute-0 python3.9[245203]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:44 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49ec001d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:44 compute-0 sudo[245201]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:45.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 24 09:45:45 compute-0 sudo[245353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-panuldepmjvlclhvdurowvmzxzriqiqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977544.9414308-2289-201610057981944/AnsiballZ_file.py'
Nov 24 09:45:45 compute-0 sudo[245353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:45:45
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'backups', 'volumes', 'vms', '.mgr', '.nfs', 'images', 'cephfs.cephfs.data', 'default.rgw.log']
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:45:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:45 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:45 compute-0 python3.9[245355]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:45:45 compute-0 sudo[245353]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:45:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:45:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:45 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:46.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:46 compute-0 ceph-mon[74331]: pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:45:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:46 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:47.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:45:47.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:45:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:45:47.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:45:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:45:47.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:45:47 compute-0 sudo[245507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljrvykayqegnwpxrpwfbaymtyyjjavhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977546.830011-2460-262130326006298/AnsiballZ_file.py'
Nov 24 09:45:47 compute-0 sudo[245507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:47 compute-0 python3.9[245509]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:47 compute-0 sudo[245507]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:47 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49ec001d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:47 compute-0 sudo[245660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxepqevnlbpupphswzdbmblgpnzdyvtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977547.40275-2460-150169147399014/AnsiballZ_file.py'
Nov 24 09:45:47 compute-0 sudo[245660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:45:47 compute-0 python3.9[245662]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:47 compute-0 sudo[245660]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:47 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:48.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:48 compute-0 sudo[245823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qavxvvzvjxvtapzfekbaczfldjvlxfno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977547.986736-2460-24380110761145/AnsiballZ_file.py'
Nov 24 09:45:48 compute-0 sudo[245823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:48 compute-0 podman[245786]: 2025-11-24 09:45:48.294033545 +0000 UTC m=+0.098322538 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:45:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:48 compute-0 python3.9[245831]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:48 compute-0 sudo[245823]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:48 compute-0 ceph-mon[74331]: pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:45:48 compute-0 sudo[245991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwgemxqbpawfufspxgalnkrewxyqpcvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977548.5679588-2460-45093724505868/AnsiballZ_file.py'
Nov 24 09:45:48 compute-0 sudo[245991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:48 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:48 compute-0 python3.9[245993]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:49 compute-0 sudo[245991]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:49.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:49 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4a0000a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:49 compute-0 sudo[246144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnrsfuxyafssxlgffobssbuxuyhabyff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977549.1596277-2460-51494258738699/AnsiballZ_file.py'
Nov 24 09:45:49 compute-0 sudo[246144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:49 compute-0 python3.9[246146]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:49 compute-0 sudo[246144]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:49 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49ec001d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:50 compute-0 sudo[246296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hihvuciruvyxzdwfxywfmoqyukrjbehb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977549.7592626-2460-3033463292941/AnsiballZ_file.py'
Nov 24 09:45:50 compute-0 sudo[246296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:50.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:50 compute-0 python3.9[246298]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:50 compute-0 sudo[246296]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:50 compute-0 sudo[246449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugjcjtpsyabkolcjtugmhaahlxvgzwui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977550.3811755-2460-86703799288390/AnsiballZ_file.py'
Nov 24 09:45:50 compute-0 sudo[246449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:50 compute-0 ceph-mon[74331]: pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:50 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49ec001d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:50 compute-0 python3.9[246451]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:50 compute-0 sudo[246449]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:45:50] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:45:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:45:50] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:45:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:51.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:51 compute-0 sudo[246602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrxsuawaolmhalrefwykzkhwwsqoidji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977550.9878762-2460-116044958613010/AnsiballZ_file.py'
Nov 24 09:45:51 compute-0 sudo[246602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:51 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:51 compute-0 python3.9[246604]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:45:51 compute-0 sudo[246602]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:51 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49f4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:52 compute-0 ceph-mon[74331]: pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:45:52 compute-0 sudo[246755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnwmyxmiobbhjlwisrckpakkfdkerzmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977552.5425868-2634-11632653566973/AnsiballZ_command.py'
Nov 24 09:45:52 compute-0 sudo[246755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:52 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49ec001d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:53 compute-0 python3.9[246757]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:45:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:53.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:45:53 compute-0 sudo[246755]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:53 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:45:53 compute-0 python3.9[246910]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 09:45:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:53 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:45:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:54.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:54 compute-0 sudo[247061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plgblbvczsrtxekxsdctdpyoabzgamiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977554.2945795-2688-90380589061985/AnsiballZ_systemd_service.py'
Nov 24 09:45:54 compute-0 sudo[247061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:54 compute-0 ceph-mon[74331]: pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:45:54 compute-0 kernel: ganesha.nfsd[244263]: segfault at 50 ip 00007f4aac35532e sp 00007f4a797f9210 error 4 in libntirpc.so.5.8[7f4aac33a000+2c000] likely on CPU 4 (core 0, socket 4)
Nov 24 09:45:54 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:45:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[231438]: 24/11/2025 09:45:54 : epoch 692428fa : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49d4001bd0 fd 39 proxy ignored for local
Nov 24 09:45:54 compute-0 systemd[1]: Started Process Core Dump (PID 247064/UID 0).
Nov 24 09:45:54 compute-0 python3.9[247063]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:45:54 compute-0 systemd[1]: Reloading.
Nov 24 09:45:54 compute-0 systemd-sysv-generator[247092]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:45:55 compute-0 systemd-rc-local-generator[247089]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:45:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:55.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:55 compute-0 sudo[247061]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:55 compute-0 podman[247103]: 2025-11-24 09:45:55.307052296 +0000 UTC m=+0.054597073 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:45:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:45:55 compute-0 sudo[247269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdykkowxpcsqjsbotwzjvfzbgprqfzic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977555.4884417-2712-194683379246499/AnsiballZ_command.py'
Nov 24 09:45:55 compute-0 sudo[247269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:55 compute-0 python3.9[247271]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:55 compute-0 systemd-coredump[247065]: Process 231442 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 57:
                                                    #0  0x00007f4aac35532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:45:55 compute-0 sudo[247269]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:56 compute-0 systemd[1]: systemd-coredump@7-247064-0.service: Deactivated successfully.
Nov 24 09:45:56 compute-0 systemd[1]: systemd-coredump@7-247064-0.service: Consumed 1.132s CPU time.
Nov 24 09:45:56 compute-0 podman[247314]: 2025-11-24 09:45:56.114026297 +0000 UTC m=+0.024669828 container died e52b5521c1ca85472f90d84d542c2f965eec7792e33978d03021d5570540aca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:45:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b7c7091bee56c0867016abdbe9e6cf981db5df26d3faf4ee292150e91aa4387-merged.mount: Deactivated successfully.
Nov 24 09:45:56 compute-0 podman[247314]: 2025-11-24 09:45:56.154905901 +0000 UTC m=+0.065549422 container remove e52b5521c1ca85472f90d84d542c2f965eec7792e33978d03021d5570540aca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:45:56 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:45:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:56.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:56 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Failed with result 'exit-code'.
Nov 24 09:45:56 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.476s CPU time.
Nov 24 09:45:56 compute-0 sudo[247468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mirrxnifoffrplqitejttyzabufipual ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977556.1036675-2712-220866412659010/AnsiballZ_command.py'
Nov 24 09:45:56 compute-0 sudo[247468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:56 compute-0 python3.9[247470]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:56 compute-0 sudo[247468]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:56 compute-0 ceph-mon[74331]: pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:45:56 compute-0 sudo[247621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxnpuunrkpzekouwqcmsdmeenwxrnkqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977556.7239351-2712-120541044492337/AnsiballZ_command.py'
Nov 24 09:45:56 compute-0 sudo[247621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:57.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:45:57.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:45:57 compute-0 python3.9[247623]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:57 compute-0 sudo[247621]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:57 compute-0 sudo[247775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpdtdwuefqesblgxtmfkryvirenkriqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977557.4057002-2712-30700288998441/AnsiballZ_command.py'
Nov 24 09:45:57 compute-0 sudo[247775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:45:57 compute-0 python3.9[247777]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:57 compute-0 sudo[247775]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:45:58.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:58 compute-0 sudo[247928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdkdyulragjshbyoazxmrqsvrbkskuxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977557.9878244-2712-98974625397429/AnsiballZ_command.py'
Nov 24 09:45:58 compute-0 sudo[247928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:45:58 compute-0 python3.9[247930]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:58 compute-0 sudo[247928]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:58 compute-0 ceph-mon[74331]: pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:45:58 compute-0 sudo[248082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqcsbmiivfjnjckzvyfhzdcemhylubss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977558.5889328-2712-163123919085708/AnsiballZ_command.py'
Nov 24 09:45:58 compute-0 sudo[248082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:45:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:45:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:45:59.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:45:59 compute-0 python3.9[248084]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:59 compute-0 sudo[248082]: pam_unix(sudo:session): session closed for user root
Nov 24 09:45:59 compute-0 sudo[248236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eotbsgnstddyerqonvyfgvtmlbvkmsld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977559.3939655-2712-29938377814754/AnsiballZ_command.py'
Nov 24 09:45:59 compute-0 sudo[248236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:45:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:45:59 compute-0 python3.9[248238]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:45:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094559 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:45:59 compute-0 sudo[248236]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:00.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:00 compute-0 sudo[248389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qztfkxgfizyozcvkjddimamyzhioejpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977559.9967234-2712-68021006215589/AnsiballZ_command.py'
Nov 24 09:46:00 compute-0 sudo[248389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:00 compute-0 python3.9[248391]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 09:46:00 compute-0 sudo[248389]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094600 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:46:00 compute-0 ceph-mon[74331]: pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:46:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:46:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:46:00] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:46:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:46:00] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:46:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:01.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:46:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:02.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:02 compute-0 sudo[248545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvrswvpdogdesvvlmpzcczkfgzmouzqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977562.2742553-2919-191146837099303/AnsiballZ_file.py'
Nov 24 09:46:02 compute-0 sudo[248545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:02 compute-0 python3.9[248547]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:02 compute-0 sudo[248545]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:02 compute-0 sudo[248569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:46:02 compute-0 sudo[248569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:02 compute-0 sudo[248569]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:02 compute-0 sudo[248605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 24 09:46:02 compute-0 sudo[248605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:02 compute-0 ceph-mon[74331]: pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:46:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:03.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:03 compute-0 sudo[248754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yalccmifaabejzovrnyeuzzzamjvxpyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977562.8794286-2919-264634876200744/AnsiballZ_file.py'
Nov 24 09:46:03 compute-0 sudo[248754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:03 compute-0 sudo[248605]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:46:03 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:46:03 compute-0 python3.9[248761]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:03 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:03 compute-0 sudo[248754]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:03 compute-0 sudo[248771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:46:03 compute-0 sudo[248771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:03 compute-0 sudo[248771]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:03 compute-0 sudo[248819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:46:03 compute-0 sudo[248819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:03 compute-0 sudo[248897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:46:03 compute-0 sudo[248897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:03 compute-0 sudo[248897]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:03 compute-0 sudo[249009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qratxgaepmmmtbsrlocvdntqmoymokkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977563.4468515-2919-196215687692701/AnsiballZ_file.py'
Nov 24 09:46:03 compute-0 sudo[249009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:46:03 compute-0 python3.9[249011]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:03 compute-0 sudo[249009]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:03 compute-0 sudo[248819]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:46:04 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:04 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:46:04 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:04 compute-0 sudo[249053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:46:04 compute-0 sudo[249053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:04 compute-0 sudo[249053]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:04 compute-0 sudo[249078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:46:04 compute-0 sudo[249078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:04.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:04 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:04 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:04 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:46:04 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:46:04 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:04 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:04 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:46:04 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:46:04 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:46:04 compute-0 podman[249148]: 2025-11-24 09:46:04.54061073 +0000 UTC m=+0.040957397 container create 1b83532095c353551586f9a5f06cf66f939f78ad44b20c9973833aff93f8be20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lichterman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 24 09:46:04 compute-0 systemd[1]: Started libpod-conmon-1b83532095c353551586f9a5f06cf66f939f78ad44b20c9973833aff93f8be20.scope.
Nov 24 09:46:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:46:04 compute-0 podman[249148]: 2025-11-24 09:46:04.614976128 +0000 UTC m=+0.115322825 container init 1b83532095c353551586f9a5f06cf66f939f78ad44b20c9973833aff93f8be20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lichterman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:46:04 compute-0 podman[249148]: 2025-11-24 09:46:04.523756436 +0000 UTC m=+0.024103123 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:46:04 compute-0 podman[249148]: 2025-11-24 09:46:04.621465607 +0000 UTC m=+0.121812274 container start 1b83532095c353551586f9a5f06cf66f939f78ad44b20c9973833aff93f8be20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lichterman, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:46:04 compute-0 podman[249148]: 2025-11-24 09:46:04.625239229 +0000 UTC m=+0.125585896 container attach 1b83532095c353551586f9a5f06cf66f939f78ad44b20c9973833aff93f8be20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lichterman, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:46:04 compute-0 systemd[1]: libpod-1b83532095c353551586f9a5f06cf66f939f78ad44b20c9973833aff93f8be20.scope: Deactivated successfully.
Nov 24 09:46:04 compute-0 infallible_lichterman[249213]: 167 167
Nov 24 09:46:04 compute-0 conmon[249213]: conmon 1b83532095c353551586 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b83532095c353551586f9a5f06cf66f939f78ad44b20c9973833aff93f8be20.scope/container/memory.events
Nov 24 09:46:04 compute-0 podman[249148]: 2025-11-24 09:46:04.628139092 +0000 UTC m=+0.128485779 container died 1b83532095c353551586f9a5f06cf66f939f78ad44b20c9973833aff93f8be20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 09:46:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8757b46149b62fedc168273b3424bc37837a495c30c337d158d648e8ecae2468-merged.mount: Deactivated successfully.
Nov 24 09:46:04 compute-0 podman[249148]: 2025-11-24 09:46:04.662084466 +0000 UTC m=+0.162431133 container remove 1b83532095c353551586f9a5f06cf66f939f78ad44b20c9973833aff93f8be20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lichterman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:46:04 compute-0 systemd[1]: libpod-conmon-1b83532095c353551586f9a5f06cf66f939f78ad44b20c9973833aff93f8be20.scope: Deactivated successfully.
Nov 24 09:46:04 compute-0 sudo[249308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfztwwpaxhxlodiluxqpirggjqltxocq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977564.5268054-2985-226281024742653/AnsiballZ_file.py'
Nov 24 09:46:04 compute-0 sudo[249308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:04 compute-0 podman[249309]: 2025-11-24 09:46:04.825005439 +0000 UTC m=+0.046016992 container create 504bec738e1ed0e51ccb3b67f1518bbf986f736474593f6b10e19453eae2c137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:46:04 compute-0 systemd[1]: Started libpod-conmon-504bec738e1ed0e51ccb3b67f1518bbf986f736474593f6b10e19453eae2c137.scope.
Nov 24 09:46:04 compute-0 podman[249309]: 2025-11-24 09:46:04.804939736 +0000 UTC m=+0.025951309 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:46:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a2bf6e024c974c811513484e1c57b6866f501b2c72c5cad563d002299f6d4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a2bf6e024c974c811513484e1c57b6866f501b2c72c5cad563d002299f6d4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a2bf6e024c974c811513484e1c57b6866f501b2c72c5cad563d002299f6d4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a2bf6e024c974c811513484e1c57b6866f501b2c72c5cad563d002299f6d4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a2bf6e024c974c811513484e1c57b6866f501b2c72c5cad563d002299f6d4b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:04 compute-0 podman[249309]: 2025-11-24 09:46:04.929581779 +0000 UTC m=+0.150593352 container init 504bec738e1ed0e51ccb3b67f1518bbf986f736474593f6b10e19453eae2c137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mclaren, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:46:04 compute-0 podman[249309]: 2025-11-24 09:46:04.938566669 +0000 UTC m=+0.159578212 container start 504bec738e1ed0e51ccb3b67f1518bbf986f736474593f6b10e19453eae2c137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mclaren, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:46:04 compute-0 podman[249309]: 2025-11-24 09:46:04.941604594 +0000 UTC m=+0.162616167 container attach 504bec738e1ed0e51ccb3b67f1518bbf986f736474593f6b10e19453eae2c137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mclaren, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:46:04 compute-0 python3.9[249317]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:05 compute-0 sudo[249308]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:05.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:05 compute-0 exciting_mclaren[249328]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:46:05 compute-0 exciting_mclaren[249328]: --> All data devices are unavailable
Nov 24 09:46:05 compute-0 systemd[1]: libpod-504bec738e1ed0e51ccb3b67f1518bbf986f736474593f6b10e19453eae2c137.scope: Deactivated successfully.
Nov 24 09:46:05 compute-0 podman[249309]: 2025-11-24 09:46:05.291328948 +0000 UTC m=+0.512340511 container died 504bec738e1ed0e51ccb3b67f1518bbf986f736474593f6b10e19453eae2c137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7a2bf6e024c974c811513484e1c57b6866f501b2c72c5cad563d002299f6d4b-merged.mount: Deactivated successfully.
Nov 24 09:46:05 compute-0 podman[249309]: 2025-11-24 09:46:05.341540652 +0000 UTC m=+0.562552205 container remove 504bec738e1ed0e51ccb3b67f1518bbf986f736474593f6b10e19453eae2c137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_mclaren, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 09:46:05 compute-0 systemd[1]: libpod-conmon-504bec738e1ed0e51ccb3b67f1518bbf986f736474593f6b10e19453eae2c137.scope: Deactivated successfully.
Nov 24 09:46:05 compute-0 sudo[249505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywfpvsgnmudzamrmqziopjjmzjhyzgra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977565.1303966-2985-197408808255543/AnsiballZ_file.py'
Nov 24 09:46:05 compute-0 sudo[249078]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:05 compute-0 sudo[249505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:05 compute-0 ceph-mon[74331]: pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:46:05 compute-0 sudo[249508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:46:05 compute-0 sudo[249508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:05 compute-0 sudo[249508]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:05 compute-0 sudo[249533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:46:05 compute-0 sudo[249533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:05 compute-0 python3.9[249507]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:05 compute-0 sudo[249505]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:46:05 compute-0 podman[249698]: 2025-11-24 09:46:05.855332238 +0000 UTC m=+0.037206906 container create 50e17f8a6b872cb3aa31d887c6c0562f773ad821b40752851558624e697e96d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_feynman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:46:05 compute-0 systemd[1]: Started libpod-conmon-50e17f8a6b872cb3aa31d887c6c0562f773ad821b40752851558624e697e96d9.scope.
Nov 24 09:46:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:46:05 compute-0 podman[249698]: 2025-11-24 09:46:05.928428074 +0000 UTC m=+0.110302762 container init 50e17f8a6b872cb3aa31d887c6c0562f773ad821b40752851558624e697e96d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 09:46:05 compute-0 podman[249698]: 2025-11-24 09:46:05.93518859 +0000 UTC m=+0.117063258 container start 50e17f8a6b872cb3aa31d887c6c0562f773ad821b40752851558624e697e96d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 09:46:05 compute-0 podman[249698]: 2025-11-24 09:46:05.840186036 +0000 UTC m=+0.022060734 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:46:05 compute-0 podman[249698]: 2025-11-24 09:46:05.938175424 +0000 UTC m=+0.120050092 container attach 50e17f8a6b872cb3aa31d887c6c0562f773ad821b40752851558624e697e96d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:46:05 compute-0 cranky_feynman[249738]: 167 167
Nov 24 09:46:05 compute-0 systemd[1]: libpod-50e17f8a6b872cb3aa31d887c6c0562f773ad821b40752851558624e697e96d9.scope: Deactivated successfully.
Nov 24 09:46:05 compute-0 podman[249698]: 2025-11-24 09:46:05.940821209 +0000 UTC m=+0.122695887 container died 50e17f8a6b872cb3aa31d887c6c0562f773ad821b40752851558624e697e96d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_feynman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 24 09:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2f2662a066a1d32d661bef9482eeb38b68ed5ba0b3ce20d8b1be10f83ba8281-merged.mount: Deactivated successfully.
Nov 24 09:46:05 compute-0 sudo[249776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bohnkjzrmvtnyiwbdahmgflbnjiihkpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977565.7064478-2985-55116654839265/AnsiballZ_file.py'
Nov 24 09:46:05 compute-0 sudo[249776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:05 compute-0 podman[249698]: 2025-11-24 09:46:05.979771526 +0000 UTC m=+0.161646194 container remove 50e17f8a6b872cb3aa31d887c6c0562f773ad821b40752851558624e697e96d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:46:05 compute-0 systemd[1]: libpod-conmon-50e17f8a6b872cb3aa31d887c6c0562f773ad821b40752851558624e697e96d9.scope: Deactivated successfully.
Nov 24 09:46:06 compute-0 podman[249791]: 2025-11-24 09:46:06.134207671 +0000 UTC m=+0.038291722 container create 56d0ee679f77ed884abe0bef0140300086433dfda45f9c313a1d3641ea3df7ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hoover, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:46:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:06.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:06 compute-0 python3.9[249783]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:06 compute-0 systemd[1]: Started libpod-conmon-56d0ee679f77ed884abe0bef0140300086433dfda45f9c313a1d3641ea3df7ae.scope.
Nov 24 09:46:06 compute-0 sudo[249776]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:06 compute-0 podman[249791]: 2025-11-24 09:46:06.117587192 +0000 UTC m=+0.021671273 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:46:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7037cfe581afe38bc00bda52609a3d9b49adba01f286f4d9a296ebe957508107/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7037cfe581afe38bc00bda52609a3d9b49adba01f286f4d9a296ebe957508107/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7037cfe581afe38bc00bda52609a3d9b49adba01f286f4d9a296ebe957508107/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7037cfe581afe38bc00bda52609a3d9b49adba01f286f4d9a296ebe957508107/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:06 compute-0 podman[249791]: 2025-11-24 09:46:06.232464725 +0000 UTC m=+0.136548776 container init 56d0ee679f77ed884abe0bef0140300086433dfda45f9c313a1d3641ea3df7ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:46:06 compute-0 podman[249791]: 2025-11-24 09:46:06.23957422 +0000 UTC m=+0.143658261 container start 56d0ee679f77ed884abe0bef0140300086433dfda45f9c313a1d3641ea3df7ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hoover, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:46:06 compute-0 podman[249791]: 2025-11-24 09:46:06.24241811 +0000 UTC m=+0.146502171 container attach 56d0ee679f77ed884abe0bef0140300086433dfda45f9c313a1d3641ea3df7ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hoover, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 09:46:06 compute-0 ceph-mon[74331]: pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:46:06 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Scheduled restart job, restart counter is at 8.
Nov 24 09:46:06 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:46:06 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.476s CPU time.
Nov 24 09:46:06 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]: {
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:     "0": [
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:         {
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             "devices": [
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "/dev/loop3"
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             ],
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             "lv_name": "ceph_lv0",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             "lv_size": "21470642176",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             "name": "ceph_lv0",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             "tags": {
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.cluster_name": "ceph",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.crush_device_class": "",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.encrypted": "0",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.osd_id": "0",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.type": "block",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.vdo": "0",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:                 "ceph.with_tpm": "0"
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             },
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             "type": "block",
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:             "vg_name": "ceph_vg0"
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:         }
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]:     ]
Nov 24 09:46:06 compute-0 flamboyant_hoover[249808]: }
Nov 24 09:46:06 compute-0 systemd[1]: libpod-56d0ee679f77ed884abe0bef0140300086433dfda45f9c313a1d3641ea3df7ae.scope: Deactivated successfully.
Nov 24 09:46:06 compute-0 podman[249791]: 2025-11-24 09:46:06.579094364 +0000 UTC m=+0.483178415 container died 56d0ee679f77ed884abe0bef0140300086433dfda45f9c313a1d3641ea3df7ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hoover, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:46:06 compute-0 sudo[250001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raitoyywquvxzfvyolanxtxmjkmhvgry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977566.3359215-2985-15599295882185/AnsiballZ_file.py'
Nov 24 09:46:06 compute-0 sudo[250001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7037cfe581afe38bc00bda52609a3d9b49adba01f286f4d9a296ebe957508107-merged.mount: Deactivated successfully.
Nov 24 09:46:06 compute-0 podman[249791]: 2025-11-24 09:46:06.617482137 +0000 UTC m=+0.521566188 container remove 56d0ee679f77ed884abe0bef0140300086433dfda45f9c313a1d3641ea3df7ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:46:06 compute-0 systemd[1]: libpod-conmon-56d0ee679f77ed884abe0bef0140300086433dfda45f9c313a1d3641ea3df7ae.scope: Deactivated successfully.
Nov 24 09:46:06 compute-0 sudo[249533]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:06 compute-0 podman[250025]: 2025-11-24 09:46:06.685038057 +0000 UTC m=+0.045668863 container create 8cfa845d75d5604b34281178556638270379d19daada1fad880251d8d2ee8d75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 09:46:06 compute-0 sudo[250034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:46:06 compute-0 sudo[250034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:06 compute-0 sudo[250034]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee78d55d4f5ea86ecb404eed46a59950dbb09ffae3bd860df8d1a7a9d3d3266/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee78d55d4f5ea86ecb404eed46a59950dbb09ffae3bd860df8d1a7a9d3d3266/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee78d55d4f5ea86ecb404eed46a59950dbb09ffae3bd860df8d1a7a9d3d3266/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee78d55d4f5ea86ecb404eed46a59950dbb09ffae3bd860df8d1a7a9d3d3266/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ssprex-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:06 compute-0 podman[250025]: 2025-11-24 09:46:06.750662809 +0000 UTC m=+0.111293635 container init 8cfa845d75d5604b34281178556638270379d19daada1fad880251d8d2ee8d75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:46:06 compute-0 podman[250025]: 2025-11-24 09:46:06.75515005 +0000 UTC m=+0.115780856 container start 8cfa845d75d5604b34281178556638270379d19daada1fad880251d8d2ee8d75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:46:06 compute-0 bash[250025]: 8cfa845d75d5604b34281178556638270379d19daada1fad880251d8d2ee8d75
Nov 24 09:46:06 compute-0 sudo[250065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:46:06 compute-0 podman[250025]: 2025-11-24 09:46:06.667218539 +0000 UTC m=+0.027849365 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:46:06 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:46:06 compute-0 sudo[250065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:06 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:46:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:06 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:46:06 compute-0 python3.9[250010]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:06 compute-0 sudo[250001]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:06 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:46:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:06 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:46:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:06 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:46:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:06 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:46:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:06 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:46:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:06 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:46:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:46:07.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:46:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:07.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:07 compute-0 sudo[250334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqoknoptyiypandxfeyxwpwdqhqzarqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977566.90531-2985-206214806423467/AnsiballZ_file.py'
Nov 24 09:46:07 compute-0 podman[250294]: 2025-11-24 09:46:07.160693165 +0000 UTC m=+0.055471043 container create cc504a94f20590e289347d71c40134665463cdd4001a1735eafa694a90ef0a65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:46:07 compute-0 sudo[250334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:07 compute-0 systemd[1]: Started libpod-conmon-cc504a94f20590e289347d71c40134665463cdd4001a1735eafa694a90ef0a65.scope.
Nov 24 09:46:07 compute-0 podman[250294]: 2025-11-24 09:46:07.129288064 +0000 UTC m=+0.024065972 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:46:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:46:07 compute-0 podman[250294]: 2025-11-24 09:46:07.262093248 +0000 UTC m=+0.156871136 container init cc504a94f20590e289347d71c40134665463cdd4001a1735eafa694a90ef0a65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_beaver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Nov 24 09:46:07 compute-0 podman[250294]: 2025-11-24 09:46:07.268881345 +0000 UTC m=+0.163659223 container start cc504a94f20590e289347d71c40134665463cdd4001a1735eafa694a90ef0a65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_beaver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 09:46:07 compute-0 strange_beaver[250339]: 167 167
Nov 24 09:46:07 compute-0 systemd[1]: libpod-cc504a94f20590e289347d71c40134665463cdd4001a1735eafa694a90ef0a65.scope: Deactivated successfully.
Nov 24 09:46:07 compute-0 podman[250294]: 2025-11-24 09:46:07.278914601 +0000 UTC m=+0.173692479 container attach cc504a94f20590e289347d71c40134665463cdd4001a1735eafa694a90ef0a65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_beaver, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 24 09:46:07 compute-0 podman[250294]: 2025-11-24 09:46:07.279655059 +0000 UTC m=+0.174432937 container died cc504a94f20590e289347d71c40134665463cdd4001a1735eafa694a90ef0a65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_beaver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 24 09:46:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b686689d0226b64f729a334b77ceab97db1d5912f768c976f615f982105d40d8-merged.mount: Deactivated successfully.
Nov 24 09:46:07 compute-0 podman[250294]: 2025-11-24 09:46:07.323846005 +0000 UTC m=+0.218623883 container remove cc504a94f20590e289347d71c40134665463cdd4001a1735eafa694a90ef0a65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 09:46:07 compute-0 systemd[1]: libpod-conmon-cc504a94f20590e289347d71c40134665463cdd4001a1735eafa694a90ef0a65.scope: Deactivated successfully.
Nov 24 09:46:07 compute-0 python3.9[250336]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:07 compute-0 sudo[250334]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:07 compute-0 podman[250389]: 2025-11-24 09:46:07.481330405 +0000 UTC m=+0.040817384 container create 0645b6323bfe6740404622c7cf3d10c84dd164e4051ba54c1c53c602c0305e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 09:46:07 compute-0 systemd[1]: Started libpod-conmon-0645b6323bfe6740404622c7cf3d10c84dd164e4051ba54c1c53c602c0305e1a.scope.
Nov 24 09:46:07 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d5cc253703ad4f7ea8ad4bd29bb7daa4ec272b937005959cf7f0b55f7501a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d5cc253703ad4f7ea8ad4bd29bb7daa4ec272b937005959cf7f0b55f7501a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d5cc253703ad4f7ea8ad4bd29bb7daa4ec272b937005959cf7f0b55f7501a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d5cc253703ad4f7ea8ad4bd29bb7daa4ec272b937005959cf7f0b55f7501a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:07 compute-0 podman[250389]: 2025-11-24 09:46:07.556316508 +0000 UTC m=+0.115803517 container init 0645b6323bfe6740404622c7cf3d10c84dd164e4051ba54c1c53c602c0305e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:46:07 compute-0 podman[250389]: 2025-11-24 09:46:07.464682215 +0000 UTC m=+0.024169214 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:46:07 compute-0 podman[250389]: 2025-11-24 09:46:07.564649573 +0000 UTC m=+0.124136552 container start 0645b6323bfe6740404622c7cf3d10c84dd164e4051ba54c1c53c602c0305e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_northcutt, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:46:07 compute-0 podman[250389]: 2025-11-24 09:46:07.568031015 +0000 UTC m=+0.127518024 container attach 0645b6323bfe6740404622c7cf3d10c84dd164e4051ba54c1c53c602c0305e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_northcutt, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:46:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:46:07 compute-0 sudo[250546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bewlldmmrzvtzylduavvfjgwmhsfhgps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977567.52159-2985-161513171420833/AnsiballZ_file.py'
Nov 24 09:46:07 compute-0 sudo[250546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:07 compute-0 python3.9[250552]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:08 compute-0 sudo[250546]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:08 compute-0 lvm[250659]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:46:08 compute-0 lvm[250659]: VG ceph_vg0 finished
Nov 24 09:46:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:08.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:08 compute-0 elegant_northcutt[250430]: {}
Nov 24 09:46:08 compute-0 systemd[1]: libpod-0645b6323bfe6740404622c7cf3d10c84dd164e4051ba54c1c53c602c0305e1a.scope: Deactivated successfully.
Nov 24 09:46:08 compute-0 podman[250389]: 2025-11-24 09:46:08.238232765 +0000 UTC m=+0.797719744 container died 0645b6323bfe6740404622c7cf3d10c84dd164e4051ba54c1c53c602c0305e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_northcutt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:46:08 compute-0 systemd[1]: libpod-0645b6323bfe6740404622c7cf3d10c84dd164e4051ba54c1c53c602c0305e1a.scope: Consumed 1.072s CPU time.
Nov 24 09:46:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9d5cc253703ad4f7ea8ad4bd29bb7daa4ec272b937005959cf7f0b55f7501a3-merged.mount: Deactivated successfully.
Nov 24 09:46:08 compute-0 podman[250389]: 2025-11-24 09:46:08.277956311 +0000 UTC m=+0.837443290 container remove 0645b6323bfe6740404622c7cf3d10c84dd164e4051ba54c1c53c602c0305e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:46:08 compute-0 systemd[1]: libpod-conmon-0645b6323bfe6740404622c7cf3d10c84dd164e4051ba54c1c53c602c0305e1a.scope: Deactivated successfully.
Nov 24 09:46:08 compute-0 sudo[250065]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:46:08 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:46:08 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:08 compute-0 sudo[250791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odddxqnhlnfodgzzzpglvppimaexulqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977568.1450224-2985-182016118628704/AnsiballZ_file.py'
Nov 24 09:46:08 compute-0 sudo[250791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:08 compute-0 sudo[250759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:46:08 compute-0 sudo[250759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:08 compute-0 sudo[250759]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:08 compute-0 python3.9[250800]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:08 compute-0 sudo[250791]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:08 compute-0 ceph-mon[74331]: pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:46:08 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:08 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:46:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:09.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:46:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:10.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:10 compute-0 ceph-mon[74331]: pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:46:10 compute-0 podman[250828]: 2025-11-24 09:46:10.820584914 +0000 UTC m=+0.084020206 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 09:46:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:46:10] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:46:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:46:10] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Nov 24 09:46:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:11.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:46:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:12.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:12 compute-0 ceph-mon[74331]: pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:46:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:12 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:46:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:12 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:46:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:13.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:46:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:14.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:14 compute-0 sudo[250978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maxpetpjbwgawbzolazeczrvbemekbpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977574.1178248-3310-121614689210779/AnsiballZ_getent.py'
Nov 24 09:46:14 compute-0 sudo[250978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:14 compute-0 python3.9[250980]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 24 09:46:14 compute-0 sudo[250978]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:14 compute-0 ceph-mon[74331]: pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:46:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:15.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:46:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:46:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:46:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:46:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:46:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:46:15 compute-0 sudo[251132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmdqclmrknqgcydbwdzyqrvkgkezheqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977575.015781-3334-198054224175054/AnsiballZ_group.py'
Nov 24 09:46:15 compute-0 sudo[251132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:15 compute-0 python3.9[251134]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 09:46:15 compute-0 groupadd[251135]: group added to /etc/group: name=nova, GID=42436
Nov 24 09:46:15 compute-0 groupadd[251135]: group added to /etc/gshadow: name=nova
Nov 24 09:46:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:46:15 compute-0 groupadd[251135]: new group: name=nova, GID=42436
Nov 24 09:46:15 compute-0 sudo[251132]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:15 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 24 09:46:15 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:15.986608) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:46:15 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 24 09:46:15 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977575986657, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 921, "num_deletes": 251, "total_data_size": 1607038, "memory_usage": 1633040, "flush_reason": "Manual Compaction"}
Nov 24 09:46:15 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 24 09:46:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:46:15 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977575998668, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1553648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19102, "largest_seqno": 20022, "table_properties": {"data_size": 1549113, "index_size": 2187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10009, "raw_average_key_size": 19, "raw_value_size": 1540015, "raw_average_value_size": 3013, "num_data_blocks": 98, "num_entries": 511, "num_filter_entries": 511, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977498, "oldest_key_time": 1763977498, "file_creation_time": 1763977575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:46:15 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 12101 microseconds, and 3898 cpu microseconds.
Nov 24 09:46:15 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:15.998715) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1553648 bytes OK
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:15.998733) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.001551) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.001565) EVENT_LOG_v1 {"time_micros": 1763977576001560, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.001581) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1602680, prev total WAL file size 1602680, number of live WAL files 2.
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.002342) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1517KB)], [41(12MB)]
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576002428, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 15135876, "oldest_snapshot_seqno": -1}
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 5004 keys, 12954757 bytes, temperature: kUnknown
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576075314, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 12954757, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12920359, "index_size": 20784, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 127697, "raw_average_key_size": 25, "raw_value_size": 12828682, "raw_average_value_size": 2563, "num_data_blocks": 851, "num_entries": 5004, "num_filter_entries": 5004, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763977576, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.075537) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12954757 bytes
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.076871) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.5 rd, 177.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.0 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(18.1) write-amplify(8.3) OK, records in: 5520, records dropped: 516 output_compression: NoCompression
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.076903) EVENT_LOG_v1 {"time_micros": 1763977576076882, "job": 20, "event": "compaction_finished", "compaction_time_micros": 72941, "compaction_time_cpu_micros": 30101, "output_level": 6, "num_output_files": 1, "total_output_size": 12954757, "num_input_records": 5520, "num_output_records": 5004, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576077501, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977576079967, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.002224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.080080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.080085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.080087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.080088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:46:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:46:16.080090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
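
The JOB 19/20 sequence above is the monitor compacting its RocksDB store on request (the EVENT_LOG gives compaction_reason "ManualCompaction"). A minimal way to issue the same request by hand, assuming admin access to the cluster; the store.db path is the one logged by the delete scheduler, and in containerized deployments it may sit under the cluster-fsid directory on the host:

    # Ask mon.compute-0 to compact its key-value store (this is what
    # appears in the log as a ManualCompaction job):
    ceph tell mon.compute-0 compact

    # Check the on-disk size of the monitor store before/after:
    du -sh /var/lib/ceph/mon/ceph-compute-0/store.db
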
Nov 24 09:46:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:16.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
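
These anonymous "HEAD / HTTP/1.0" requests arriving roughly every second from 192.168.122.100 and .102 have the shape of load-balancer health probes against radosgw. A sketch of reproducing one by hand; the port is a placeholder, since this excerpt never shows which port beast is listening on:

    # Send the same probe the health checker sends (port 8080 is an
    # assumption, not taken from this log):
    curl -sS -I http://compute-0.ctlplane.example.com:8080/
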
Nov 24 09:46:16 compute-0 sudo[251291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yublmlfbeqtsqlqkwuoztlwuwxfziikh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977576.1119804-3358-97435738004157/AnsiballZ_user.py'
Nov 24 09:46:16 compute-0 sudo[251291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:16 compute-0 python3.9[251293]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 09:46:16 compute-0 useradd[251295]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 24 09:46:16 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:46:16 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:46:16 compute-0 useradd[251295]: add 'nova' to group 'libvirt'
Nov 24 09:46:16 compute-0 useradd[251295]: add 'nova' to shadow group 'libvirt'
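
The ansible.builtin.user invocation and the useradd records above amount to a single account creation. Roughly the equivalent direct command, with the uid, supplementary group, shell, and comment taken from the log (ansible also created /home/nova because create_home=True):

    # Create the nova service account as recorded above:
    useradd --uid 42436 --groups libvirt --shell /bin/sh --comment 'nova user' nova
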
Nov 24 09:46:16 compute-0 sudo[251291]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:17 compute-0 ceph-mon[74331]: pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
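
The recurring pgmap lines are the monitor's periodic placement-group summary. The same figures can be read on demand:

    # One-shot versions of the pgmap summary above:
    ceph pg stat
    ceph df
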
Nov 24 09:46:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:46:17.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:46:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:46:17.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:46:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:46:17.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
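
The alertmanager warnings show both ceph-dashboard webhook receivers timing out at the TCP level. A direct probe of the two URLs copied from the errors above, with a short timeout to match the dial failure:

    # Check reachability of the failing prometheus_receiver endpoints:
    curl -m 5 -sv http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver
    curl -m 5 -sv http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver
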
Nov 24 09:46:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:17.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:46:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.002000048s ======
Nov 24 09:46:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:18.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Nov 24 09:46:18 compute-0 sshd-session[251328]: Accepted publickey for zuul from 192.168.122.30 port 60640 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 09:46:18 compute-0 systemd-logind[822]: New session 55 of user zuul.
Nov 24 09:46:18 compute-0 systemd[1]: Started Session 55 of User zuul.
Nov 24 09:46:18 compute-0 sshd-session[251328]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 09:46:18 compute-0 podman[251332]: 2025-11-24 09:46:18.409026252 +0000 UTC m=+0.074691656 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
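
The podman health_status event above is the scheduled run of the container's configured healthcheck ('test': '/openstack/healthcheck'). The same check can be run on demand; the container name is taken from the event:

    # Exit status 0 means healthy, non-zero means unhealthy:
    podman healthcheck run ovn_controller
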
Nov 24 09:46:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:18 compute-0 sshd-session[251338]: Received disconnect from 192.168.122.30 port 60640:11: disconnected by user
Nov 24 09:46:18 compute-0 sshd-session[251338]: Disconnected from user zuul 192.168.122.30 port 60640
Nov 24 09:46:18 compute-0 sshd-session[251328]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:46:18 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Nov 24 09:46:18 compute-0 systemd-logind[822]: Session 55 logged out. Waiting for processes to exit.
Nov 24 09:46:18 compute-0 systemd-logind[822]: Removed session 55.
Nov 24 09:46:19 compute-0 python3.9[251509]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:19.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:19 compute-0 ceph-mon[74331]: pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
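
The DBUS :CRIT failures during this startup trace back to dbus_bus_get: /run/dbus/system_bus_socket does not exist inside the NFS container, so every later gsh_dbus_register_path call fails as well. A quick check, using the container name from the unit prefix above:

    # Confirm the system bus socket is absent in the ganesha container:
    podman exec ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex \
        ls -l /run/dbus/system_bus_socket
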
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac5c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:19 compute-0 python3.9[251643]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977578.628775-3433-115237753212920/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:46:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:19 : epoch 6924295e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac500014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:20 compute-0 python3.9[251796]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:20.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:20 compute-0 python3.9[251873]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:46:20.556 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:46:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:46:20.557 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:46:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:46:20.557 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:46:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094620 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:46:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [ALERT] 327/094620 (4) : backend 'backend' has no server available!
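
At this point HAProxy has marked the last server of the NFS backend DOWN, so the 'backend' proxy has no servers left. One way to watch it recover (nfs.cephfs.0 and nfs.cephfs.2 come back UP at 09:46:21-22 below) is over the admin socket; the socket path here is an assumption, since this deployment's haproxy.cfg is not part of the log:

    # Dump proxy/server state; field 18 of HAProxy's CSV is the status column:
    echo 'show stat' | socat stdio /var/lib/haproxy/stats | cut -d, -f1,2,18
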
Nov 24 09:46:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:20 : epoch 6924295e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac38000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:46:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:46:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:46:20] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:46:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:21.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:21 compute-0 python3.9[252023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:21 compute-0 ceph-mon[74331]: pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:46:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:21 : epoch 6924295e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac34000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:21 compute-0 python3.9[252145]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977580.664696-3433-190433524891340/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 24 09:46:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094621 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:46:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:21 : epoch 6924295e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac58001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:22 compute-0 python3.9[252295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:22 compute-0 python3.9[252417]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977581.7570217-3433-194823247280687/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094622 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:46:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:22 : epoch 6924295e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac500021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:23.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:23 compute-0 ceph-mon[74331]: pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 24 09:46:23 compute-0 python3.9[252567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:23 : epoch 6924295e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:23 compute-0 sudo[252690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:46:23 compute-0 sudo[252690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:23 compute-0 sudo[252690]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:46:23 compute-0 python3.9[252689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977582.8366919-3433-84395062769186/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:23 : epoch 6924295e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac340016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:24 compute-0 python3.9[252864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:24 compute-0 python3.9[252986]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977583.8962924-3433-37361148278389/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:24 compute-0 kernel: ganesha.nfsd[251582]: segfault at 50 ip 00007fad0884132e sp 00007faccf7fd210 error 4 in libntirpc.so.5.8[7fad08826000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 24 09:46:24 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:46:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[250066]: 24/11/2025 09:46:24 : epoch 6924295e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fac500021d0 fd 38 proxy ignored for local
Nov 24 09:46:24 compute-0 systemd[1]: Started Process Core Dump (PID 252987/UID 0).
Nov 24 09:46:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:25.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:25 compute-0 ceph-mon[74331]: pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:46:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:46:25 compute-0 podman[253014]: 2025-11-24 09:46:25.773923777 +0000 UTC m=+0.047174620 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 09:46:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:26.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:26 compute-0 sudo[253160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qogzowgundljmnzikhggshbhqhvnollr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977586.1087744-3682-164927237458965/AnsiballZ_file.py'
Nov 24 09:46:26 compute-0 sudo[253160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:26 compute-0 systemd-coredump[252988]: Process 250092 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 42:
                                                    #0  0x00007fad0884132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:46:26 compute-0 systemd[1]: systemd-coredump@8-252987-0.service: Deactivated successfully.
Nov 24 09:46:26 compute-0 systemd[1]: systemd-coredump@8-252987-0.service: Consumed 1.628s CPU time.
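
systemd-coredump has captured the ganesha.nfsd crash (PID 250092) with a single usable frame in libntirpc.so.5.8. The stored dump can be examined with coredumpctl; resolving the +0x2232e offset to a symbol would need the libntirpc debuginfo installed:

    # Show metadata and the full stack trace for the dump above:
    coredumpctl info 250092

    # Open the core under gdb for deeper inspection:
    coredumpctl debug 250092
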
Nov 24 09:46:26 compute-0 python3.9[253162]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:46:26 compute-0 sudo[253160]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:26 compute-0 podman[253167]: 2025-11-24 09:46:26.618153323 +0000 UTC m=+0.031811533 container died 8cfa845d75d5604b34281178556638270379d19daada1fad880251d8d2ee8d75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:46:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ee78d55d4f5ea86ecb404eed46a59950dbb09ffae3bd860df8d1a7a9d3d3266-merged.mount: Deactivated successfully.
Nov 24 09:46:26 compute-0 podman[253167]: 2025-11-24 09:46:26.662546204 +0000 UTC m=+0.076204344 container remove 8cfa845d75d5604b34281178556638270379d19daada1fad880251d8d2ee8d75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:46:26 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:46:26 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Failed with result 'exit-code'.
Nov 24 09:46:26 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.423s CPU time.
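
status=139 is 128+11, i.e. the main process died of SIGSEGV, matching the ganesha.nfsd segfault logged at 09:46:24. To see what systemd recorded for the failed instance:

    # Unit name copied from the failure messages above:
    systemctl status 'ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service'
    journalctl -u 'ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service' -n 50
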
Nov 24 09:46:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:46:27.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:46:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:46:27.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:46:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:27.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:27 compute-0 sudo[253359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfuhoifrliwxayovwtksvmcptxvjvdoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977586.9378362-3706-100848261204017/AnsiballZ_copy.py'
Nov 24 09:46:27 compute-0 sudo[253359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:27 compute-0 ceph-mon[74331]: pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:46:27 compute-0 python3.9[253361]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:46:27 compute-0 sudo[253359]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:46:28 compute-0 sudo[253512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lndepicwundtxxmasvbudzdstwhaqmif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977587.7223756-3730-115257923000276/AnsiballZ_stat.py'
Nov 24 09:46:28 compute-0 sudo[253512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:28.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:28 compute-0 python3.9[253514]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:46:28 compute-0 sudo[253512]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:28 compute-0 sudo[253665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cclqwisjrddhxhmzmgcarprlthcrjyir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977588.4596467-3754-48092640166938/AnsiballZ_stat.py'
Nov 24 09:46:28 compute-0 sudo[253665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:28 compute-0 python3.9[253667]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:28 compute-0 sudo[253665]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:29.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:29 compute-0 ceph-mon[74331]: pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Nov 24 09:46:29 compute-0 sudo[253789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmbvbltcgspruaftawrnstwnlygougfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977588.4596467-3754-48092640166938/AnsiballZ_copy.py'
Nov 24 09:46:29 compute-0 sudo[253789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:29 compute-0 python3.9[253791]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1763977588.4596467-3754-48092640166938/.source _original_basename=.4peugi58 follow=False checksum=525d0e5306d3ead14248a5e4850083cbaabcdf8e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 24 09:46:29 compute-0 sudo[253789]: pam_unix(sudo:session): session closed for user root
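
The copy task above writes /var/lib/nova/compute_id with attributes=+i, i.e. the file is made immutable after being written. Verifying the flag:

    # An 'i' in the lsattr output marks the file immutable; it would
    # have to be cleared with 'chattr -i' before any further edit:
    lsattr /var/lib/nova/compute_id
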
Nov 24 09:46:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:46:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:30.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:30 compute-0 python3.9[253944]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:46:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094630 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:46:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:46:30] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:46:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:46:30] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:46:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:31.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:31 compute-0 ceph-mon[74331]: pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:46:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:46:31 compute-0 python3.9[254096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:31 compute-0 python3.9[254218]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977590.896069-3832-233192459659385/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:32.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:32 compute-0 python3.9[254369]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 09:46:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:33.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:33 compute-0 python3.9[254490]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763977592.2062461-3877-83983118466768/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 09:46:33 compute-0 ceph-mon[74331]: pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:46:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9181 writes, 36K keys, 9181 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9181 writes, 2009 syncs, 4.57 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 720 writes, 1140 keys, 720 commit groups, 1.0 writes per commit group, ingest: 0.38 MB, 0.00 MB/s
                                           Interval WAL: 720 writes, 336 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 09:46:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:34.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:34 compute-0 sudo[254641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isomkezdbnpmrppikrglkdaxqvhlfykj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977593.9189487-3928-280285241113037/AnsiballZ_container_config_data.py'
Nov 24 09:46:34 compute-0 sudo[254641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:34 compute-0 python3.9[254643]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 24 09:46:34 compute-0 sudo[254641]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:35 compute-0 sudo[254794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwuymshogeswkmnrlcswnlmskndejymt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977594.7821622-3955-116399538188527/AnsiballZ_container_config_hash.py'
Nov 24 09:46:35 compute-0 sudo[254794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:35.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:35 compute-0 python3.9[254796]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 09:46:35 compute-0 sudo[254794]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:35 compute-0 ceph-mon[74331]: pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:35 compute-0 sudo[254947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cklxazxydfurwnbuownooihymgoaolao ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763977595.7367203-3985-275845985722610/AnsiballZ_edpm_container_manage.py'
Nov 24 09:46:35 compute-0 sudo[254947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:36.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:36 compute-0 python3[254949]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 09:46:36 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Scheduled restart job, restart counter is at 9.
Nov 24 09:46:36 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:46:36 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.423s CPU time.
Nov 24 09:46:36 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:46:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:46:37.057Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:46:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:46:37.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:46:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:37.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:37 compute-0 podman[255032]: 2025-11-24 09:46:37.209194465 +0000 UTC m=+0.045292223 container create 23a5a18a3f0edbf33c725c8301f9a3f79a26049224ffead44197162fdd659a4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e4efe15d3eab0f916d9eefcafbd4d304e88d2bbed7a4576062987b323e1697/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e4efe15d3eab0f916d9eefcafbd4d304e88d2bbed7a4576062987b323e1697/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e4efe15d3eab0f916d9eefcafbd4d304e88d2bbed7a4576062987b323e1697/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e4efe15d3eab0f916d9eefcafbd4d304e88d2bbed7a4576062987b323e1697/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ssprex-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:46:37 compute-0 podman[255032]: 2025-11-24 09:46:37.27403624 +0000 UTC m=+0.110134008 container init 23a5a18a3f0edbf33c725c8301f9a3f79a26049224ffead44197162fdd659a4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:46:37 compute-0 podman[255032]: 2025-11-24 09:46:37.187232576 +0000 UTC m=+0.023330354 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:46:37 compute-0 podman[255032]: 2025-11-24 09:46:37.282204641 +0000 UTC m=+0.118302399 container start 23a5a18a3f0edbf33c725c8301f9a3f79a26049224ffead44197162fdd659a4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:46:37 compute-0 bash[255032]: 23a5a18a3f0edbf33c725c8301f9a3f79a26049224ffead44197162fdd659a4c
Nov 24 09:46:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:37 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:46:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:37 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:46:37 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:46:37 compute-0 ceph-mon[74331]: pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:37 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:46:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:37 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:46:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:37 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:46:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:37 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:46:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:37 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:46:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:37 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:46:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:38.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:39.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:39 compute-0 ceph-mon[74331]: pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:40.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:46:40] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:46:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:46:40] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:46:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:41.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:41 compute-0 ceph-mon[74331]: pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:46:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 24 09:46:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:42.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:42 compute-0 podman[255114]: 2025-11-24 09:46:42.468279452 +0000 UTC m=+0.746069275 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:46:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:43.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:46:43 compute-0 sudo[255149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:46:43 compute-0 sudo[255149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:46:43 compute-0 sudo[255149]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:44.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:45.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:45 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 24 09:46:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:45 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 24 09:46:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:45 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:46:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:45 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:46:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:45 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:46:45
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.meta', '.mgr', '.nfs', 'default.rgw.control', 'volumes', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta']
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:46:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:46:45 compute-0 ceph-mon[74331]: pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 24 09:46:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:46:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:46.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:46:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:46 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:46:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:46 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:46:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:46 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:46:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:46:47.058Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:46:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:46:47.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:46:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:47.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:47 compute-0 ceph-mon[74331]: pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:46:47 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:46:47 compute-0 ceph-mon[74331]: pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:46:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 24 09:46:47 compute-0 podman[254964]: 2025-11-24 09:46:47.728967517 +0000 UTC m=+11.422354132 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 09:46:47 compute-0 podman[255214]: 2025-11-24 09:46:47.877881416 +0000 UTC m=+0.057626107 container create 252d85ef275353b7778bcea8d13e017f59765962da915da603356f1a04d46e91 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible)
Nov 24 09:46:47 compute-0 podman[255214]: 2025-11-24 09:46:47.840941169 +0000 UTC m=+0.020685890 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 09:46:47 compute-0 python3[254949]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 24 09:46:47 compute-0 sudo[254947]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:48.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094648 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:46:48 compute-0 podman[255277]: 2025-11-24 09:46:48.798309984 +0000 UTC m=+0.076933661 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 24 09:46:49 compute-0 ceph-mon[74331]: pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 24 09:46:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:49.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 24 09:46:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:50.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:46:50] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Nov 24 09:46:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:46:50] "GET /metrics HTTP/1.1" 200 48278 "" "Prometheus/2.51.0"
Nov 24 09:46:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:51.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:51 compute-0 ceph-mon[74331]: pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 24 09:46:51 compute-0 sudo[255431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuzavlcvmoilzvqwzklxmdjeznsahfex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977611.0582972-4009-161434934244018/AnsiballZ_stat.py'
Nov 24 09:46:51 compute-0 sudo[255431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:51 compute-0 python3.9[255433]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:46:51 compute-0 sudo[255431]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 24 09:46:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094651 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:46:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:52.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000001b:nfs.cephfs.2: -2
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:46:52 compute-0 sudo[255599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmfnafnwspbzhkjxhksuekizepfuwzhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977612.2373784-4045-157826985395660/AnsiballZ_container_config_data.py'
Nov 24 09:46:52 compute-0 sudo[255599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:52 compute-0 python3.9[255601]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 24 09:46:52 compute-0 sudo[255599]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa540000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:53.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:53 compute-0 ceph-mon[74331]: pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 24 09:46:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:53 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5280016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:54 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa514000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 494 B/s wr, 1 op/s
Nov 24 09:46:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:46:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:54.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:46:54 compute-0 sudo[255755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unqbkdtoitrxyaoouadyjsszaydbdouu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977614.3903148-4072-205683287938150/AnsiballZ_container_config_hash.py'
Nov 24 09:46:54 compute-0 sudo[255755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094654 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:46:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:54 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:54 compute-0 python3.9[255757]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 09:46:54 compute-0 sudo[255755]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:55.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:55 compute-0 ceph-mon[74331]: pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 494 B/s wr, 1 op/s
Nov 24 09:46:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:55 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:55 compute-0 sudo[255908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oauozyjcyirnjpsfscicgefdomknvryt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763977615.2989466-4102-33592608837336/AnsiballZ_edpm_container_manage.py'
Nov 24 09:46:55 compute-0 sudo[255908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:55 compute-0 python3[255910]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 09:46:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:56 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa514000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:56 compute-0 podman[255949]: 2025-11-24 09:46:56.039514638 +0000 UTC m=+0.052687566 container create 2bcc3f6b74feccee69b753e47d7cc4656ba8a54db8bd7c4a29440bb4766f2a4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:46:56 compute-0 podman[255949]: 2025-11-24 09:46:56.014072422 +0000 UTC m=+0.027245350 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 09:46:56 compute-0 python3[255910]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 24 09:46:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 494 B/s wr, 1 op/s
Nov 24 09:46:56 compute-0 sudo[255908]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:56.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:56 compute-0 podman[256088]: 2025-11-24 09:46:56.778350854 +0000 UTC m=+0.051533978 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:46:56 compute-0 sudo[256157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbkhixkgjyhcdatobajrlppaknddxbrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977616.5410812-4126-133864424695472/AnsiballZ_stat.py'
Nov 24 09:46:56 compute-0 sudo[256157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:56 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa514000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:56 compute-0 python3.9[256159]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:46:57 compute-0 sudo[256157]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:46:57.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:46:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:57.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:57 compute-0 ceph-mon[74331]: pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 494 B/s wr, 1 op/s
Nov 24 09:46:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:57 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:57 compute-0 sudo[256312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrqbhqerfofwnfmxouwsembvstgmqjaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977617.3769634-4153-149916794959892/AnsiballZ_file.py'
Nov 24 09:46:57 compute-0 sudo[256312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:57 compute-0 python3.9[256314]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:46:57 compute-0 sudo[256312]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:58 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 494 B/s wr, 1 op/s
Nov 24 09:46:58 compute-0 sudo[256463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syfcpeydvcdzmkieabmwtkfdybhqllbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977617.8718994-4153-9061100910801/AnsiballZ_copy.py'
Nov 24 09:46:58 compute-0 sudo[256463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:46:58.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:58 compute-0 python3.9[256465]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763977617.8718994-4153-9061100910801/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 09:46:58 compute-0 sudo[256463]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:46:58 compute-0 sudo[256540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmplyjubeufvlvcnshuhpqsabshrdblr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977617.8718994-4153-9061100910801/AnsiballZ_systemd.py'
Nov 24 09:46:58 compute-0 sudo[256540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:58 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa514000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:58 compute-0 python3.9[256542]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 09:46:58 compute-0 systemd[1]: Reloading.
Nov 24 09:46:58 compute-0 systemd-rc-local-generator[256568]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:46:59 compute-0 systemd-sysv-generator[256573]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:46:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:46:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:46:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:46:59.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:46:59 compute-0 ceph-mon[74331]: pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 494 B/s wr, 1 op/s
Nov 24 09:46:59 compute-0 sudo[256540]: pam_unix(sudo:session): session closed for user root
Nov 24 09:46:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:46:59 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa514000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:46:59 compute-0 sudo[256652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twdqldeoshnweloqmavchjensfyitvxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977617.8718994-4153-9061100910801/AnsiballZ_systemd.py'
Nov 24 09:46:59 compute-0 sudo[256652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:46:59 compute-0 python3.9[256654]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 09:46:59 compute-0 systemd[1]: Reloading.
Nov 24 09:46:59 compute-0 systemd-rc-local-generator[256682]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 09:46:59 compute-0 systemd-sysv-generator[256685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 09:47:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:00 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 576 B/s rd, 164 B/s wr, 0 op/s
Nov 24 09:47:00 compute-0 systemd[1]: Starting nova_compute container...
Nov 24 09:47:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:00.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114372ba9b59567b97dc1ae261a5e4cf4162dd089003ae3a2f2aa8efd72359ee/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114372ba9b59567b97dc1ae261a5e4cf4162dd089003ae3a2f2aa8efd72359ee/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114372ba9b59567b97dc1ae261a5e4cf4162dd089003ae3a2f2aa8efd72359ee/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114372ba9b59567b97dc1ae261a5e4cf4162dd089003ae3a2f2aa8efd72359ee/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114372ba9b59567b97dc1ae261a5e4cf4162dd089003ae3a2f2aa8efd72359ee/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
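The kernel emits one of these warnings per bind-mounted path whenever an XFS filesystem lacking the bigtime feature is remounted: its inode timestamps are signed 32-bit seconds, and the 0x7fffffff in the message is the classic Unix epoch limit. A two-line check of what that boundary means:

    # 0x7fffffff seconds after the epoch is the limit named in the
    # kernel warnings above (the "year 2038" boundary).
    from datetime import datetime, timezone

    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00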
Nov 24 09:47:00 compute-0 podman[256693]: 2025-11-24 09:47:00.32992004 +0000 UTC m=+0.095747024 container init 2bcc3f6b74feccee69b753e47d7cc4656ba8a54db8bd7c4a29440bb4766f2a4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:47:00 compute-0 podman[256693]: 2025-11-24 09:47:00.340336006 +0000 UTC m=+0.106162970 container start 2bcc3f6b74feccee69b753e47d7cc4656ba8a54db8bd7c4a29440bb4766f2a4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
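The config_data label embedded in both podman events is the EDPM definition of this container: privileged, host network and host PID namespaces, restart=always, running kolla_start as user nova, with the nova, libvirt, iscsi, multipath and ceph paths bind-mounted in. A hypothetical helper showing how such a dict maps onto a podman command line (illustrative only; edpm_ansible drives podman through its own role, not this function):

    # Hypothetical: render a config_data dict (as shown in the podman
    # events above) into a 'podman run' argument list.
    def to_podman_args(cfg: dict, name: str) -> list[str]:
        args = ["podman", "run", "--name", name, "--detach"]
        if cfg.get("privileged"):
            args.append("--privileged")
        for key in ("net", "pid", "user"):
            if key in cfg:
                args.append(f"--{key}={cfg[key]}")
        if "restart" in cfg:
            args.append(f"--restart={cfg['restart']}")
        for k, v in cfg.get("environment", {}).items():
            args += ["--env", f"{k}={v}"]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])  # image is always present in config_data
        if "command" in cfg:
            args.append(cfg["command"])
        return args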
Nov 24 09:47:00 compute-0 nova_compute[256709]: + sudo -E kolla_set_configs
Nov 24 09:47:00 compute-0 podman[256693]: nova_compute
Nov 24 09:47:00 compute-0 systemd[1]: Started nova_compute container.
Nov 24 09:47:00 compute-0 sudo[256652]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Validating config file
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Copying service configuration files
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Deleting /etc/ceph
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Creating directory /etc/ceph
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Writing out command to execute
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:00 compute-0 nova_compute[256709]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 09:47:00 compute-0 nova_compute[256709]: ++ cat /run_command
Nov 24 09:47:00 compute-0 nova_compute[256709]: + CMD=nova-compute
Nov 24 09:47:00 compute-0 nova_compute[256709]: + ARGS=
Nov 24 09:47:00 compute-0 nova_compute[256709]: + sudo kolla_copy_cacerts
Nov 24 09:47:00 compute-0 nova_compute[256709]: + [[ ! -n '' ]]
Nov 24 09:47:00 compute-0 nova_compute[256709]: + . kolla_extend_start
Nov 24 09:47:00 compute-0 nova_compute[256709]: Running command: 'nova-compute'
Nov 24 09:47:00 compute-0 nova_compute[256709]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 09:47:00 compute-0 nova_compute[256709]: + umask 0022
Nov 24 09:47:00 compute-0 nova_compute[256709]: + exec nova-compute
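The INFO:__main__ block above is kolla_set_configs applying KOLLA_CONFIG_STRATEGY=COPY_ALWAYS to /var/lib/kolla/config_files/config.json: each destination is deleted, re-copied from the bind-mounted source, and re-permissioned on every container start, after which kolla_start reads /run_command and execs it (here, nova-compute). A condensed sketch of that copy loop, assuming the usual config.json entry shape of source/dest/perm; the real kolla script also handles globs, ownership and optional sources:

    # Condensed sketch of the COPY_ALWAYS strategy visible in the
    # log above; the real kolla_set_configs covers more cases.
    import json, os, shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    for item in cfg.get("config_files", []):
        src, dest = item["source"], item["dest"]
        if os.path.exists(dest):
            print(f"Deleting {dest}")
            shutil.rmtree(dest) if os.path.isdir(dest) else os.remove(dest)
        print(f"Copying {src} to {dest}")
        if os.path.isdir(src):
            shutil.copytree(src, dest)
        else:
            shutil.copy2(src, dest)
        print(f"Setting permission for {dest}")
        os.chmod(dest, int(item.get("perm", "0600"), 8))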
Nov 24 09:47:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:00 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:47:00] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:47:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:47:00] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
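These two lines record the same event twice: a Prometheus scrape of the ceph-mgr prometheus module, logged once by the mgr container and once through cherrypy's access log. A sketch of fetching the same endpoint; the port (9283) is the module's default and an assumption here, since it does not appear in the log:

    # Fetching the ceph-mgr /metrics endpoint scraped above; the
    # port 9283 is an assumed default, not taken from the log.
    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics") as r:
        body = r.read().decode()
    print(body.splitlines()[0])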
Nov 24 09:47:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:01.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:01 compute-0 ceph-mon[74331]: pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 576 B/s rd, 164 B/s wr, 0 op/s
Nov 24 09:47:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:47:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:01 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa514000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:01 compute-0 python3.9[256872]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:47:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:02 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 988 B/s rd, 247 B/s wr, 1 op/s
Nov 24 09:47:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:02.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:02 compute-0 nova_compute[256709]: 2025-11-24 09:47:02.510 256713 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 09:47:02 compute-0 nova_compute[256709]: 2025-11-24 09:47:02.510 256713 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 09:47:02 compute-0 nova_compute[256709]: 2025-11-24 09:47:02.510 256713 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 09:47:02 compute-0 nova_compute[256709]: 2025-11-24 09:47:02.511 256713 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
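os-vif discovers VIF plugins through setuptools entry points, which is why linux_bridge, noop and ovs are all loaded at initialize() even though only one backend will actually be used on this host. A minimal sketch of the same discovery with stevedore, assuming the os_vif entry-point namespace and an environment with os-vif installed:

    # Enumerating VIF plugins the way os-vif does at startup; the
    # entry-point namespace "os_vif" is an assumption here.
    from stevedore import extension

    mgr = extension.ExtensionManager(namespace="os_vif",
                                     invoke_on_load=False)
    print(sorted(mgr.names()))  # e.g. ['linux_bridge', 'noop', 'ovs']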
Nov 24 09:47:02 compute-0 python3.9[257025]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:47:02 compute-0 nova_compute[256709]: 2025-11-24 09:47:02.653 256713 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:47:02 compute-0 nova_compute[256709]: 2025-11-24 09:47:02.672 256713 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:47:02 compute-0 nova_compute[256709]: 2025-11-24 09:47:02.673 256713 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
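This three-line sequence is a capability probe, not an error: the service greps the iscsiadm binary for the node.session.scan option string to decide whether manual iSCSI scans are supported, and exit code 1 just means the string was absent. Note that kolla replaced /usr/sbin/iscsiadm with the run-on-host shim a few lines earlier, so the probe is effectively testing that wrapper; the "Not Retrying" message is expected and non-fatal. The check reduced to the standard library:

    # The manual-scan capability probe seen above; grep returning 1
    # means "pattern not found", not a command failure.
    import subprocess

    res = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        capture_output=True)
    manual_scan_supported = (res.returncode == 0)
    print(manual_scan_supported)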
Nov 24 09:47:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:02 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:03.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.308 256713 INFO nova.virt.driver [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 24 09:47:03 compute-0 ceph-mon[74331]: pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 988 B/s rd, 247 B/s wr, 1 op/s
Nov 24 09:47:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:03 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.462 256713 INFO nova.compute.provider_config [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.473 256713 DEBUG oslo_concurrency.lockutils [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.473 256713 DEBUG oslo_concurrency.lockutils [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.474 256713 DEBUG oslo_concurrency.lockutils [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.474 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.474 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.474 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.474 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.475 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.475 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
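Everything from here onward is oslo.config's log_opt_values(), which oslo.service calls once at startup when debug=True: it first records where options came from (command line, the config_file list, then the config_dir), and then every effective value. Files in /etc/nova/nova.conf.d are parsed in sorted filename order after the config files, with later files overriding earlier ones, which is why the snippets are numbered 01-nova.conf, 02-nova-host-specific.conf, 03-ceph-nova.conf, 25-nova-extra.conf. A self-contained demonstration of that layering:

    # oslo.config layering as used in the dump above: *.conf files in
    # a --config-dir are parsed in sorted order, later values winning.
    import tempfile, pathlib
    from oslo_config import cfg

    tmp = pathlib.Path(tempfile.mkdtemp())
    (tmp / "01-a.conf").write_text("[DEFAULT]\ndebug = false\n")
    (tmp / "02-b.conf").write_text("[DEFAULT]\ndebug = true\n")

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.BoolOpt("debug", default=False)])
    CONF(["--config-dir", str(tmp)])
    print(CONF.debug)  # True: 02-b.conf overrides 01-a.conf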
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.475 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.475 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.475 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.475 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.475 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.476 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.476 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.476 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.476 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.476 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.476 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.477 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.477 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.477 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.477 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.477 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.477 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.478 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.478 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.478 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.478 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.478 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.478 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.478 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.479 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.479 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.479 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.479 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.479 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.479 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.480 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.480 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.480 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.480 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.480 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.480 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.480 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.481 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.481 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.481 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.481 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.481 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.481 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.482 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.482 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.482 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.482 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.482 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.482 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.483 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.483 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.483 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.483 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.483 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.483 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.484 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.484 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.484 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.484 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.484 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.484 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.484 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.485 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.485 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.485 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.485 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.485 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.485 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.486 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.486 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.486 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.486 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.487 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.487 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.487 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.487 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.487 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.488 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.488 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.488 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.488 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.488 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.488 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.489 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.489 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.489 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.489 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.489 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.490 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.490 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.490 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.490 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.490 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.491 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.491 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.491 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.491 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.491 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.492 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.492 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.492 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.492 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.492 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.493 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.493 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.493 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.493 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.493 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.493 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.494 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.494 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.494 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.494 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.494 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.494 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.494 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.495 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.495 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.495 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.495 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.495 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.495 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.495 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.496 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.496 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.496 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.496 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
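The **** here is not the configured value: transport_url carries message-bus credentials, and oslo.config masks any option registered with secret=True when values are logged. A small sketch of the same masking, using a hypothetical override value:

    # Why transport_url prints as **** above: secret=True options are
    # masked when oslo.config dumps values. The URL below is made up.
    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.StrOpt("transport_url", secret=True)])
    CONF([])
    CONF.set_override("transport_url", "rabbit://user:pass@host:5672/")
    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger("demo"), logging.DEBUG)
    # ... transport_url = **** ...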
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.496 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.496 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.496 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.497 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.497 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.497 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.497 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.497 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.497 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.497 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.498 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.498 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.498 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.498 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.498 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.498 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.499 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.499 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.499 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.499 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.499 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.499 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.499 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.500 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.500 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.500 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.500 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.500 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.500 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.501 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.501 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.501 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.501 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.501 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.501 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.502 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.502 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.502 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.502 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.502 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.502 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.502 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.503 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.503 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.503 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.503 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.503 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.503 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.503 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.504 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.504 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.504 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.504 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.504 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.504 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.504 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.505 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.505 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.505 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.505 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.505 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.505 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.505 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.506 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.506 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.506 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.506 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.506 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.506 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.507 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.507 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.507 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.507 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.507 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.507 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.507 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.508 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.508 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.508 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.508 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.508 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.508 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.508 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.509 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.509 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.509 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.509 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.509 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.509 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.509 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.510 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.510 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.510 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.510 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.510 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.510 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.510 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.511 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.511 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.511 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.511 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.511 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.511 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.511 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.512 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.512 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.512 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.512 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.512 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.512 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.512 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.513 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.513 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.513 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.513 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.513 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.513 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.513 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.514 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.514 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.514 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.514 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.514 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.514 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.514 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.515 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.515 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.515 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.515 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.515 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.515 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.515 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.516 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.516 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.516 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.516 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.516 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.516 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.517 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.517 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.517 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.517 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.517 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.517 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.517 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.518 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.518 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.518 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.518 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.518 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.518 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.518 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.519 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.519 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.519 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.519 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.519 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.519 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.520 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.520 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.520 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.520 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.520 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.520 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.520 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.520 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.521 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.521 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.521 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.521 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.521 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.522 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.522 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.522 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.522 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.522 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.522 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.523 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.523 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.523 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.523 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.523 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.523 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.524 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.524 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.524 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.524 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.524 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.524 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.524 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.525 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.525 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.525 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.525 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.525 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.525 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.525 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.526 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.526 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.526 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.526 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.526 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.526 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.527 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.527 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.527 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.527 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.527 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.527 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.527 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.528 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.528 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.528 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.528 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.528 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.528 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.528 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.529 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.529 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.529 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.529 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.529 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.530 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.530 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.530 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.530 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.530 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.530 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.530 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.531 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.531 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.531 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.531 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.531 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.531 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.532 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.532 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.532 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.532 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.532 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.532 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.532 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.533 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.533 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.533 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.533 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.533 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.533 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.533 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.534 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.534 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.534 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.534 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.534 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.534 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.534 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.535 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.535 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.535 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.535 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.535 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.535 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.535 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.536 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.536 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.536 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.536 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.536 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.536 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.536 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.537 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 python3.9[257178]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.537 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.537 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.537 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.537 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.537 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.537 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.538 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.538 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.538 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.538 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.538 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.538 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.538 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.539 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.539 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.539 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.539 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.539 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.539 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.539 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.540 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.540 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.540 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.540 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.540 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.540 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.540 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.541 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.541 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.541 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.541 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.541 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.541 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.542 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.542 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.542 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.542 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.542 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.542 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.542 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.543 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.543 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.543 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.543 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.543 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.543 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.544 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.544 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.544 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.544 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.544 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.544 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.544 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.545 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.545 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.545 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.545 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.545 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.545 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.545 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.546 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.546 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.546 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.546 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.546 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.546 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.546 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.547 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.547 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.547 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.547 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.547 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.547 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.547 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.548 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.548 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.548 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.548 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.548 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.548 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.548 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.549 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.549 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.549 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.549 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.549 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.549 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.550 256713 WARNING oslo_config.cfg [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 09:47:03 compute-0 nova_compute[256709]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 09:47:03 compute-0 nova_compute[256709]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 09:47:03 compute-0 nova_compute[256709]: and ``live_migration_inbound_addr`` respectively.
Nov 24 09:47:03 compute-0 nova_compute[256709]: ).  Its value may be silently ignored in the future.
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.550 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
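[editorial aside — not part of the captured log] The deprecation warning above names the two options that supersede live_migration_uri. A minimal nova.conf sketch of the equivalent configuration, assuming the same qemu+tls scheme as the logged template; the inbound address is a hypothetical placeholder, not a value taken from this host:

    [libvirt]
    # Replaces the deprecated live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # Hypothetical placeholder: substitute this host's migration-network IP or hostname
    live_migration_inbound_addr = <migration-network-address>

Per the warning's own description, Nova then derives the migration target URI from these two values, so the deprecated URI template can be dropped.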
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.550 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.550 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.550 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.550 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.551 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.551 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.551 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.551 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.551 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.551 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.551 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.552 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.552 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.552 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.552 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.552 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.552 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.552 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.rbd_secret_uuid        = 84a084c3-61a7-5de7-8207-1f88efa59a64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.553 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.553 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.553 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.553 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.553 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.553 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.554 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.554 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.554 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.554 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.554 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.554 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.555 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.555 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.555 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.555 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.555 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.555 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.555 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.556 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.556 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.556 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.556 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.556 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.556 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.556 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.556 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.557 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.557 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.557 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.557 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.557 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.557 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.558 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.558 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.558 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.558 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.558 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.558 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.559 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.559 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.559 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.559 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.559 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.559 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.560 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.560 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.560 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.560 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.560 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.560 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.561 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.561 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.561 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.561 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.561 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.561 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.562 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.562 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.562 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.562 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.562 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.562 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.562 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.563 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.563 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.563 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.563 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.563 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.563 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.564 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.564 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.564 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.564 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.564 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.564 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.564 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.564 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.565 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.565 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.565 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.565 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.565 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.565 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.565 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.566 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.566 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.566 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.566 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.566 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.566 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.567 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.567 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.567 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.567 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.567 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.567 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.567 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.568 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.568 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.568 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.568 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.568 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.568 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.568 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.569 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.569 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.569 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.569 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.569 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.569 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.569 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.570 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.570 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.570 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.570 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.570 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.570 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.570 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.571 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.571 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.571 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.571 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.571 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.571 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.572 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.572 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.572 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.572 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.572 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.572 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.572 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.573 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.573 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.573 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.573 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.573 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.573 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.573 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.574 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.574 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.574 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.574 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.574 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.574 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.574 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.575 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.575 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.575 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.575 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.575 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.575 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.576 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.576 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.576 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.576 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.576 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.576 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.576 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.577 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.577 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.577 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.577 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.577 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.577 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.578 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.578 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.578 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.578 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.578 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.578 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.578 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.579 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.579 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.579 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.579 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.579 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.579 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.580 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.580 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.580 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.580 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.580 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.580 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.580 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.581 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.581 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.581 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.581 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.581 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.581 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.581 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.582 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.582 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.582 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.582 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.582 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.582 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.582 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.583 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.583 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.583 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.583 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.583 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.583 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.583 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.584 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.584 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.584 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.584 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.584 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.584 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.584 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.585 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.585 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.585 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.585 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.585 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.585 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.585 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.586 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.586 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.586 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.586 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.586 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.586 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.586 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.587 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.587 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.587 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.587 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.587 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.588 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.588 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.588 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.588 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.588 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.588 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.588 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.589 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.589 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.589 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.589 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.589 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.589 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.590 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.590 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.590 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.590 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.590 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.590 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.590 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.591 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.591 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.591 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.591 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.591 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.591 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.591 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.592 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.592 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.592 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.592 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.592 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.592 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.592 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.593 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.593 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.593 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.593 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.593 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.593 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.594 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.594 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.594 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.594 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.594 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.594 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.595 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.595 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.595 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.595 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.595 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.595 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.595 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.596 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.596 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.596 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.596 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.596 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.596 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.596 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.597 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.597 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.597 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.597 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.597 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.597 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.598 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.598 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.598 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.598 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.598 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.598 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.598 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.599 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.599 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.599 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.599 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.599 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.599 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.600 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.600 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.600 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.600 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.600 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.600 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.600 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.601 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.601 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.601 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.601 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.601 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.601 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.601 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.602 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.602 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.602 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.602 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.602 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.602 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.602 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.603 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.603 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.603 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.603 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.603 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.603 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.603 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.604 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.604 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.604 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.604 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.604 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.604 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.604 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.605 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.605 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.605 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.605 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.605 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.605 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.605 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.606 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.606 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.606 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.606 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.606 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.606 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.607 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.607 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.607 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.607 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.607 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.607 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.608 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.608 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.608 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.608 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.608 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.608 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.608 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.608 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.609 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.609 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.609 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.609 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.609 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.609 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.610 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.610 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.610 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.610 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.610 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.610 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.610 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.611 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.611 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.611 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.611 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.611 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.611 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.611 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.612 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.612 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.612 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.612 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.612 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.612 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.612 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.613 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.613 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.613 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.613 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.613 256713 DEBUG oslo_service.service [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
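[editor's note] The block ending in the row of asterisks above is oslo.config's standard startup dump: ConfigOpts.log_opt_values() (the cfg.py:2609 frames cited on every line) walks each registered group and option and emits one DEBUG line per value, bracketing the dump with asterisk banners and masking options registered with secret=True as '****' (as seen for oslo_limit.password and oslo_messaging_notifications.transport_url). A minimal sketch of the same mechanism, using an illustrative 'vnc' group with two of the options shown above:

import logging

from oslo_config import cfg

LOG = logging.getLogger(__name__)

# Two illustrative options mirroring entries from the dump above.
_vnc_opts = [
    cfg.StrOpt('novncproxy_host', default='0.0.0.0'),
    cfg.PortOpt('novncproxy_port', default=6080),
]

CONF = cfg.ConfigOpts()
CONF.register_opts(_vnc_opts, group='vnc')

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    CONF([])  # parse an empty command line
    # Emits "vnc.novncproxy_host = 0.0.0.0" etc., bracketed by rows of
    # asterisks, just like the service log above.
    CONF.log_opt_values(LOG, logging.DEBUG)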
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.614 256713 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.631 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.632 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.632 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.632 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 24 09:47:03 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 09:47:03 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.706 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd210a63d60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.708 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd210a63d60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.709 256713 INFO nova.virt.libvirt.driver [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Connection event '1' reason 'None'
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.751 256713 WARNING nova.virt.libvirt.driver [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 24 09:47:03 compute-0 nova_compute[256709]: 2025-11-24 09:47:03.752 256713 DEBUG nova.virt.libvirt.volume.mount [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
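[editor's note] The ComputeHostNotFound warning two lines up is expected on a first start: the libvirt driver's connection-event handler tries to update the service status before the compute service record exists in the cell database, and the record is created shortly afterwards during service startup. Once registered, the record can be checked from the API side; a hedged sketch using openstacksdk (the cloud name 'overcloud' is illustrative, not taken from this log):

import openstack

# Connect via a clouds.yaml entry; "overcloud" is an assumed name.
conn = openstack.connect(cloud='overcloud')

for svc in conn.compute.services():
    if svc.binary == 'nova-compute':
        # e.g. "compute-0.ctlplane.example.com enabled up"
        print(svc.host, svc.status, svc.state)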
Nov 24 09:47:03 compute-0 sudo[257247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:47:03 compute-0 sudo[257247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:03 compute-0 sudo[257247]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:04 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa514002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 494 B/s rd, 82 B/s wr, 0 op/s
Nov 24 09:47:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:04.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
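[editor's note] The radosgw "beast" line above is the RGW frontend's access log: client IP, user (anonymous here, from the unauthenticated HEAD health probe), timestamp, request line, HTTP status, body bytes, and request latency. A throwaway parser fitted to this one sample; the regex is an assumption derived from the line itself, not a documented format:

import re

LINE = ('beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous '
        '[24/Nov/2025:09:47:04.245 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')

PATTERN = re.compile(
    r'beast: \S+: (?P<ip>\S+) \S+ (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
    r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

m = PATTERN.match(LINE)
if m:
    # prints: 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000
    print(m.group('ip'), m.group('request'),
          m.group('status'), m.group('latency'))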
Nov 24 09:47:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:04 : epoch 6924297d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:47:04 compute-0 sudo[257414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qozqrjfxjmebgljynjhumkeiafpvkzgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977623.941574-4333-15352719650440/AnsiballZ_podman_container.py'
Nov 24 09:47:04 compute-0 sudo[257414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
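[editor's note] The <capabilities> document nova logs next is libvirt's own description of the host (CPU model and features, NUMA topology, caches, security models); nova retrieves it through its nova.virt.libvirt.host.Host wrapper, but it can be fetched directly with the libvirt Python bindings. A minimal sketch, assuming the libvirt-python package and local access to qemu:///system:

import libvirt

# Same URI nova connects to in the log above.
conn = libvirt.open('qemu:///system')
try:
    caps_xml = conn.getCapabilities()  # the <capabilities> XML dumped below
    print(caps_xml[:400])
finally:
    conn.close()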
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.567 256713 INFO nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 09:47:04 compute-0 nova_compute[256709]: 
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <host>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <uuid>4c455ecc-8696-436b-b07b-3b4a91ae800f</uuid>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <cpu>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <arch>x86_64</arch>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model>EPYC-Rome-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <vendor>AMD</vendor>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <microcode version='16777317'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <signature family='23' model='49' stepping='0'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='x2apic'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='tsc-deadline'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='osxsave'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='hypervisor'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='tsc_adjust'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='spec-ctrl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='stibp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='arch-capabilities'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='cmp_legacy'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='topoext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='virt-ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='lbrv'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='tsc-scale'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='vmcb-clean'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='pause-filter'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='pfthreshold'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='svme-addr-chk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='rdctl-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='skip-l1dfl-vmentry'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='mds-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature name='pschange-mc-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <pages unit='KiB' size='4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <pages unit='KiB' size='2048'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <pages unit='KiB' size='1048576'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </cpu>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <power_management>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <suspend_mem/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </power_management>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <iommu support='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <migration_features>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <live/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <uri_transports>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <uri_transport>tcp</uri_transport>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <uri_transport>rdma</uri_transport>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </uri_transports>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </migration_features>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <topology>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <cells num='1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <cell id='0'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:           <memory unit='KiB'>7864320</memory>
Nov 24 09:47:04 compute-0 nova_compute[256709]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 24 09:47:04 compute-0 nova_compute[256709]:           <pages unit='KiB' size='2048'>0</pages>
Nov 24 09:47:04 compute-0 nova_compute[256709]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 24 09:47:04 compute-0 nova_compute[256709]:           <distances>
Nov 24 09:47:04 compute-0 nova_compute[256709]:             <sibling id='0' value='10'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:           </distances>
Nov 24 09:47:04 compute-0 nova_compute[256709]:           <cpus num='8'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:           </cpus>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         </cell>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </cells>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </topology>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <cache>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </cache>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <secmodel>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model>selinux</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <doi>0</doi>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </secmodel>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <secmodel>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model>dac</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <doi>0</doi>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </secmodel>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </host>
Nov 24 09:47:04 compute-0 nova_compute[256709]: 
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <guest>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <os_type>hvm</os_type>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <arch name='i686'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <wordsize>32</wordsize>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <domain type='qemu'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <domain type='kvm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </arch>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <features>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <pae/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <nonpae/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <acpi default='on' toggle='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <apic default='on' toggle='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <cpuselection/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <deviceboot/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <disksnapshot default='on' toggle='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <externalSnapshot/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </features>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </guest>
Nov 24 09:47:04 compute-0 nova_compute[256709]: 
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <guest>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <os_type>hvm</os_type>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <arch name='x86_64'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <wordsize>64</wordsize>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <domain type='qemu'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <domain type='kvm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </arch>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <features>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <acpi default='on' toggle='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <apic default='on' toggle='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <cpuselection/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <deviceboot/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <disksnapshot default='on' toggle='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <externalSnapshot/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </features>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </guest>
Nov 24 09:47:04 compute-0 nova_compute[256709]: 
Nov 24 09:47:04 compute-0 nova_compute[256709]: </capabilities>
Nov 24 09:47:04 compute-0 nova_compute[256709]: 
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.575 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.603 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 09:47:04 compute-0 nova_compute[256709]: <domainCapabilities>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <domain>kvm</domain>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <arch>i686</arch>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <vcpu max='240'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <iothreads supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <os supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <enum name='firmware'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <loader supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>rom</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pflash</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='readonly'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>yes</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>no</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='secure'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>no</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </loader>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </os>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <cpu>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>on</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>off</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='maximumMigratable'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>on</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>off</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <vendor>AMD</vendor>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='succor'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='custom' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cooperlake'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='GraniteRapids'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10-128'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10-256'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10-512'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='KnightsMill'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SierraForest'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 python3.9[257416]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='athlon'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='athlon-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='core2duo'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='core2duo-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='coreduo'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='coreduo-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='n270'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='n270-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='phenom'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='phenom-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </cpu>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <memoryBacking supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <enum name='sourceType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>file</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>anonymous</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>memfd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </memoryBacking>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <devices>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <disk supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='diskDevice'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>disk</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>cdrom</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>floppy</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>lun</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='bus'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>ide</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>fdc</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>scsi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>usb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>sata</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </disk>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <graphics supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vnc</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>egl-headless</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>dbus</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </graphics>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <video supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='modelType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vga</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>cirrus</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>none</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>bochs</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>ramfb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </video>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <hostdev supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='mode'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>subsystem</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='startupPolicy'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>default</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>mandatory</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>requisite</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>optional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='subsysType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>usb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pci</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>scsi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='capsType'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='pciBackend'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </hostdev>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <rng supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>random</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>egd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>builtin</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </rng>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <filesystem supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='driverType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>path</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>handle</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtiofs</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </filesystem>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <tpm supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tpm-tis</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tpm-crb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>emulator</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>external</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendVersion'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>2.0</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </tpm>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <redirdev supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='bus'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>usb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </redirdev>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <channel supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pty</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>unix</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </channel>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <crypto supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>qemu</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>builtin</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </crypto>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <interface supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>default</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>passt</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </interface>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <panic supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>isa</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>hyperv</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </panic>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <console supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>null</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vc</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pty</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>dev</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>file</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pipe</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>stdio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>udp</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tcp</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>unix</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>qemu-vdagent</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>dbus</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </console>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </devices>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <features>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <gic supported='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <genid supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <backup supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <async-teardown supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <ps2 supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <sev supported='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <sgx supported='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <hyperv supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='features'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>relaxed</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vapic</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>spinlocks</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vpindex</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>runtime</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>synic</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>stimer</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>reset</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vendor_id</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>frequencies</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>reenlightenment</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tlbflush</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>ipi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>avic</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>emsr_bitmap</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>xmm_input</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <defaults>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </defaults>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </hyperv>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <launchSecurity supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='sectype'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tdx</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </launchSecurity>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </features>
Nov 24 09:47:04 compute-0 nova_compute[256709]: </domainCapabilities>
Nov 24 09:47:04 compute-0 nova_compute[256709]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.626 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 09:47:04 compute-0 nova_compute[256709]: <domainCapabilities>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <domain>kvm</domain>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <arch>i686</arch>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <vcpu max='4096'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <iothreads supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <os supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <enum name='firmware'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <loader supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>rom</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pflash</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='readonly'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>yes</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>no</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='secure'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>no</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </loader>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </os>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <cpu>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>on</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>off</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='maximumMigratable'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>on</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>off</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <vendor>AMD</vendor>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='succor'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='custom' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cooperlake'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='GraniteRapids'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10-128'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10-256'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10-512'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='KnightsMill'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SierraForest'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 sudo[257414]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='athlon'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='athlon-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='core2duo'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='core2duo-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='coreduo'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='coreduo-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='n270'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='n270-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='phenom'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='phenom-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </cpu>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <memoryBacking supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <enum name='sourceType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>file</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>anonymous</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>memfd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </memoryBacking>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <devices>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <disk supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='diskDevice'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>disk</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>cdrom</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>floppy</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>lun</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='bus'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>fdc</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>scsi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>usb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>sata</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </disk>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <graphics supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vnc</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>egl-headless</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>dbus</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </graphics>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <video supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='modelType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vga</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>cirrus</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>none</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>bochs</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>ramfb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </video>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <hostdev supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='mode'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>subsystem</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='startupPolicy'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>default</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>mandatory</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>requisite</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>optional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='subsysType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>usb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pci</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>scsi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='capsType'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='pciBackend'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </hostdev>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <rng supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>random</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>egd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>builtin</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </rng>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <filesystem supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='driverType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>path</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>handle</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtiofs</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </filesystem>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <tpm supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tpm-tis</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tpm-crb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>emulator</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>external</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendVersion'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>2.0</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </tpm>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <redirdev supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='bus'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>usb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </redirdev>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <channel supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pty</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>unix</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </channel>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <crypto supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>qemu</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>builtin</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </crypto>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <interface supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>default</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>passt</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </interface>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <panic supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>isa</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>hyperv</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </panic>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <console supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>null</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vc</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pty</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>dev</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>file</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pipe</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>stdio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>udp</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tcp</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>unix</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>qemu-vdagent</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>dbus</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </console>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </devices>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <features>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <gic supported='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <genid supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <backup supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <async-teardown supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <ps2 supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <sev supported='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <sgx supported='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <hyperv supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='features'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>relaxed</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vapic</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>spinlocks</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vpindex</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>runtime</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>synic</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>stimer</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>reset</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vendor_id</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>frequencies</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>reenlightenment</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tlbflush</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>ipi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>avic</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>emsr_bitmap</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>xmm_input</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <defaults>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </defaults>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </hyperv>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <launchSecurity supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='sectype'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tdx</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </launchSecurity>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </features>
Nov 24 09:47:04 compute-0 nova_compute[256709]: </domainCapabilities>
Nov 24 09:47:04 compute-0 nova_compute[256709]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.650 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.654 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 24 09:47:04 compute-0 nova_compute[256709]: <domainCapabilities>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <domain>kvm</domain>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <arch>x86_64</arch>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <vcpu max='240'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <iothreads supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <os supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <enum name='firmware'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <loader supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>rom</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pflash</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='readonly'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>yes</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>no</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='secure'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>no</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </loader>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </os>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <cpu>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>on</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>off</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='maximumMigratable'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>on</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>off</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <vendor>AMD</vendor>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='succor'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='custom' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cooperlake'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='GraniteRapids'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10-128'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10-256'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10-512'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='KnightsMill'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SierraForest'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='athlon'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='athlon-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='core2duo'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='core2duo-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='coreduo'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='coreduo-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='n270'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='n270-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='phenom'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='phenom-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </cpu>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <memoryBacking supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <enum name='sourceType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>file</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>anonymous</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>memfd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </memoryBacking>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <devices>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <disk supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='diskDevice'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>disk</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>cdrom</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>floppy</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>lun</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='bus'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>ide</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>fdc</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>scsi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>usb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>sata</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </disk>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <graphics supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vnc</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>egl-headless</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>dbus</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </graphics>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <video supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='modelType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vga</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>cirrus</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>none</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>bochs</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>ramfb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </video>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <hostdev supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='mode'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>subsystem</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='startupPolicy'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>default</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>mandatory</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>requisite</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>optional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='subsysType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>usb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pci</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>scsi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='capsType'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='pciBackend'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </hostdev>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <rng supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>random</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>egd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>builtin</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </rng>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <filesystem supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='driverType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>path</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>handle</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtiofs</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </filesystem>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <tpm supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tpm-tis</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tpm-crb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>emulator</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>external</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendVersion'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>2.0</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </tpm>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <redirdev supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='bus'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>usb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </redirdev>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <channel supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pty</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>unix</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </channel>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <crypto supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>qemu</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>builtin</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </crypto>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <interface supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>default</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>passt</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </interface>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <panic supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>isa</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>hyperv</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </panic>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <console supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>null</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vc</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pty</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>dev</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>file</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pipe</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>stdio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>udp</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tcp</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>unix</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>qemu-vdagent</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>dbus</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </console>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </devices>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <features>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <gic supported='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <genid supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <backup supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <async-teardown supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <ps2 supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <sev supported='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <sgx supported='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <hyperv supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='features'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>relaxed</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vapic</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>spinlocks</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vpindex</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>runtime</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>synic</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>stimer</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>reset</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vendor_id</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>frequencies</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>reenlightenment</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tlbflush</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>ipi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>avic</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>emsr_bitmap</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>xmm_input</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <defaults>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </defaults>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </hyperv>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <launchSecurity supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='sectype'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tdx</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </launchSecurity>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </features>
Nov 24 09:47:04 compute-0 nova_compute[256709]: </domainCapabilities>
Nov 24 09:47:04 compute-0 nova_compute[256709]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.729 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 09:47:04 compute-0 nova_compute[256709]: <domainCapabilities>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <domain>kvm</domain>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <arch>x86_64</arch>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <vcpu max='4096'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <iothreads supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <os supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <enum name='firmware'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>efi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <loader supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>rom</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pflash</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='readonly'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>yes</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>no</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='secure'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>yes</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>no</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </loader>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </os>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <cpu>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>on</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>off</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='maximumMigratable'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>on</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>off</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <vendor>AMD</vendor>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='succor'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <mode name='custom' supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cooperlake'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Denverton-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='auto-ibrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amd-psfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='stibp-always-on'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='EPYC-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='GraniteRapids'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10-128'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10-256'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx10-512'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='prefetchiti'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Haswell-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='KnightsMill'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512er'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512pf'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fma4'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tbm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xop'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='amx-tile'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-bf16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-fp16'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bitalg'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrc'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fzrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='la57'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='taa-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xfd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SierraForest'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:04 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ifma'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cmpccxadd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fbsdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='fsrs'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ibrs-all'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mcdt-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pbrsb-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='psdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='serialize'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vaes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='hle'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='rtm'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512bw'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512cd'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512dq'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512f'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='avx512vl'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='invpcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pcid'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='pku'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='mpx'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='core-capability'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='split-lock-detect'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='cldemote'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='erms'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='gfni'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdir64b'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='movdiri'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='xsaves'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='athlon'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='athlon-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='core2duo'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='core2duo-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='coreduo'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='coreduo-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='n270'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='n270-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='ss'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='phenom'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <blockers model='phenom-v1'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnow'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <feature name='3dnowext'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </blockers>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </mode>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </cpu>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <memoryBacking supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <enum name='sourceType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>file</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>anonymous</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <value>memfd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </memoryBacking>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <devices>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <disk supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='diskDevice'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>disk</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>cdrom</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>floppy</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>lun</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='bus'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>fdc</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>scsi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>usb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>sata</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </disk>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <graphics supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vnc</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>egl-headless</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>dbus</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </graphics>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <video supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='modelType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vga</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>cirrus</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>none</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>bochs</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>ramfb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </video>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <hostdev supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='mode'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>subsystem</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='startupPolicy'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>default</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>mandatory</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>requisite</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>optional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='subsysType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>usb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pci</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>scsi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='capsType'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='pciBackend'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </hostdev>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <rng supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtio-non-transitional</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>random</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>egd</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>builtin</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </rng>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <filesystem supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='driverType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>path</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>handle</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>virtiofs</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </filesystem>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <tpm supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tpm-tis</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tpm-crb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>emulator</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>external</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendVersion'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>2.0</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </tpm>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <redirdev supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='bus'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>usb</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </redirdev>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <channel supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pty</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>unix</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </channel>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <crypto supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>qemu</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendModel'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>builtin</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </crypto>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <interface supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='backendType'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>default</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>passt</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </interface>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <panic supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='model'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>isa</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>hyperv</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </panic>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <console supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='type'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>null</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vc</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pty</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>dev</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>file</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>pipe</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>stdio</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>udp</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tcp</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>unix</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>qemu-vdagent</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>dbus</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </console>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </devices>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   <features>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <gic supported='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <genid supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <backup supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <async-teardown supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <ps2 supported='yes'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <sev supported='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <sgx supported='no'/>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <hyperv supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='features'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>relaxed</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vapic</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>spinlocks</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vpindex</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>runtime</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>synic</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>stimer</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>reset</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>vendor_id</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>frequencies</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>reenlightenment</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tlbflush</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>ipi</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>avic</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>emsr_bitmap</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>xmm_input</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <defaults>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </defaults>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </hyperv>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     <launchSecurity supported='yes'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       <enum name='sectype'>
Nov 24 09:47:04 compute-0 nova_compute[256709]:         <value>tdx</value>
Nov 24 09:47:04 compute-0 nova_compute[256709]:       </enum>
Nov 24 09:47:04 compute-0 nova_compute[256709]:     </launchSecurity>
Nov 24 09:47:04 compute-0 nova_compute[256709]:   </features>
Nov 24 09:47:04 compute-0 nova_compute[256709]: </domainCapabilities>
Nov 24 09:47:04 compute-0 nova_compute[256709]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
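[annotation] The XML dump ending above is the libvirt domainCapabilities document that nova-compute fetches at startup (note the _get_domain_capabilities source reference on the last line). As a minimal sketch, not Nova's actual code, the same document can be retrieved and summarized with libvirt-python; the connection URI and architecture values below are assumptions matching this host:

    import xml.etree.ElementTree as ET
    import libvirt

    # Fetch the same domainCapabilities XML nova-compute logged above.
    conn = libvirt.open('qemu:///system')
    caps = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    conn.close()

    # Summarize the named CPU models under <mode name='custom'>: usable
    # ones, and for unusable ones the features in their <blockers> list.
    root = ET.fromstring(caps)
    mode = root.find(".//cpu/mode[@name='custom']")
    for model in mode.findall('model'):
        if model.get('usable') == 'yes':
            print('%s: usable' % model.text)
        else:
            blockers = mode.find("blockers[@model='%s']" % model.text)
            names = [f.get('name') for f in blockers.findall('feature')] if blockers is not None else []
            print('%s: %d blocking features' % (model.text, len(names)))

The blockers elements name the features this host lacks for each model, which is exactly what the long per-model lists above record (e.g. the AVX-512/AMX families for the SapphireRapids variants).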
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.797 256713 DEBUG nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.797 256713 INFO nova.virt.libvirt.host [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Secure Boot support detected
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.800 256713 INFO nova.virt.libvirt.driver [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.810 256713 DEBUG nova.virt.libvirt.driver [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
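[annotation] The post-copy and emulated-TPM messages above reflect driver behavior decided from nova.conf. As a hedged illustration (values assumed, not read from this host; the option names are standard Nova [libvirt] options), the relevant settings look like:

    [libvirt]
    # When both are permitted and the hypervisor supports post-copy,
    # post-copy wins and auto-converge goes unused, as logged above.
    live_migration_permit_post_copy = true
    live_migration_permit_auto_converge = true
    # Emulated (swtpm-backed) TPM, per "Enabling emulated TPM support".
    swtpm_enabled = true
    swtpm_user = tss
    swtpm_group = tss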
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.848 256713 INFO nova.virt.node [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Determined node identity a50ce3b5-7e9e-4263-a4aa-c35573ac7257 from /var/lib/nova/compute_id
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.874 256713 WARNING nova.compute.manager [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Compute nodes ['a50ce3b5-7e9e-4263-a4aa-c35573ac7257'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 24 09:47:04 compute-0 nova_compute[256709]: 2025-11-24 09:47:04.934 256713 INFO nova.compute.manager [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 24 09:47:05 compute-0 nova_compute[256709]: 2025-11-24 09:47:05.007 256713 WARNING nova.compute.manager [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 24 09:47:05 compute-0 nova_compute[256709]: 2025-11-24 09:47:05.007 256713 DEBUG oslo_concurrency.lockutils [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:47:05 compute-0 nova_compute[256709]: 2025-11-24 09:47:05.007 256713 DEBUG oslo_concurrency.lockutils [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:47:05 compute-0 nova_compute[256709]: 2025-11-24 09:47:05.007 256713 DEBUG oslo_concurrency.lockutils [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
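[annotation] The Acquiring/acquired/released triple above is the standard oslo.concurrency locking pattern. A minimal runnable sketch of the same pattern (the function name and body are illustrative, not Nova's code):

    from oslo_concurrency import lockutils

    # Serializes callers within this process on the named semaphore;
    # oslo emits the same Acquiring/acquired/released lines when
    # debug logging is enabled.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        print('cache pruned under lock')

    clean_compute_node_cache()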
Nov 24 09:47:05 compute-0 nova_compute[256709]: 2025-11-24 09:47:05.008 256713 DEBUG nova.compute.resource_tracker [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:47:05 compute-0 nova_compute[256709]: 2025-11-24 09:47:05.008 256713 DEBUG oslo_concurrency.processutils [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:47:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:05.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:05 compute-0 sudo[257613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqzlutnxlkifwoflcfxgwjtfqpyzfeda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977625.0899582-4357-276810942703501/AnsiballZ_systemd.py'
Nov 24 09:47:05 compute-0 sudo[257613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:47:05 compute-0 ceph-mon[74331]: pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 494 B/s rd, 82 B/s wr, 0 op/s
Nov 24 09:47:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:05 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:05 compute-0 nova_compute[256709]: 2025-11-24 09:47:05.506 256713 DEBUG oslo_concurrency.processutils [None req-81904a61-c487-4bcb-aad8-6e4580912e17 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
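[annotation] The resource tracker's disk accounting shells out to ceph df, as the processutils pair shows (started at 09:47:05.008 above, returned 0 here after 0.497s). A minimal sketch of the same call and the cluster-total fields of interest, assuming the ceph CLI and the 'openstack' keyring are available as in the log:

    import json
    import subprocess

    # Same command nova-compute ran above; parse totals from the JSON.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    gib = 2 ** 30
    print('%.1f GiB free of %.1f GiB'
          % (stats['total_avail_bytes'] / gib, stats['total_bytes'] / gib))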
Nov 24 09:47:05 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 09:47:05 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 24 09:47:05 compute-0 python3.9[257615]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 09:47:05 compute-0 systemd[1]: Stopping nova_compute container...
Nov 24 09:47:05 compute-0 nova_compute[256709]: 2025-11-24 09:47:05.726 256713 DEBUG oslo_concurrency.lockutils [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:47:05 compute-0 nova_compute[256709]: 2025-11-24 09:47:05.726 256713 DEBUG oslo_concurrency.lockutils [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:47:05 compute-0 nova_compute[256709]: 2025-11-24 09:47:05.726 256713 DEBUG oslo_concurrency.lockutils [None req-06d47a53-5200-4772-a1fd-f65aed70f68c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
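[annotation] The acquire/release pairs on "compute_resources" and "singleton_lock" are oslo.concurrency's lockutils at work; entering and leaving the guarded section emit exactly these DEBUG lines, including the waited/held timings. A sketch of the two usual spellings, with lock names taken from this journal and function bodies elided:

    # Sketch of the oslo_concurrency.lockutils patterns behind the
    # Lock "..." acquired / "released" DEBUG lines in this journal.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        ...  # body runs with the lock held; wait/hold times are logged

    def run_once_setup():
        with lockutils.lock('singleton_lock'):
            ...  # context-manager form, same logging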
Nov 24 09:47:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:06 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c002df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:06 compute-0 virtqemud[257224]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 24 09:47:06 compute-0 virtqemud[257224]: hostname: compute-0
Nov 24 09:47:06 compute-0 virtqemud[257224]: End of file while reading data: Input/output error
Nov 24 09:47:06 compute-0 systemd[1]: libpod-2bcc3f6b74feccee69b753e47d7cc4656ba8a54db8bd7c4a29440bb4766f2a4d.scope: Deactivated successfully.
Nov 24 09:47:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:47:06 compute-0 systemd[1]: libpod-2bcc3f6b74feccee69b753e47d7cc4656ba8a54db8bd7c4a29440bb4766f2a4d.scope: Consumed 3.651s CPU time.
Nov 24 09:47:06 compute-0 podman[257643]: 2025-11-24 09:47:06.158350137 +0000 UTC m=+0.473047296 container died 2bcc3f6b74feccee69b753e47d7cc4656ba8a54db8bd7c4a29440bb4766f2a4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:47:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2bcc3f6b74feccee69b753e47d7cc4656ba8a54db8bd7c4a29440bb4766f2a4d-userdata-shm.mount: Deactivated successfully.
Nov 24 09:47:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-114372ba9b59567b97dc1ae261a5e4cf4162dd089003ae3a2f2aa8efd72359ee-merged.mount: Deactivated successfully.
Nov 24 09:47:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:47:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:06.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:47:06 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2331233881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:06 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/190925437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:06 compute-0 podman[257643]: 2025-11-24 09:47:06.492908149 +0000 UTC m=+0.807605278 container cleanup 2bcc3f6b74feccee69b753e47d7cc4656ba8a54db8bd7c4a29440bb4766f2a4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 24 09:47:06 compute-0 podman[257643]: nova_compute
Nov 24 09:47:06 compute-0 podman[257673]: nova_compute
Nov 24 09:47:06 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 24 09:47:06 compute-0 systemd[1]: Stopped nova_compute container.
Nov 24 09:47:06 compute-0 systemd[1]: Starting nova_compute container...
Nov 24 09:47:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114372ba9b59567b97dc1ae261a5e4cf4162dd089003ae3a2f2aa8efd72359ee/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114372ba9b59567b97dc1ae261a5e4cf4162dd089003ae3a2f2aa8efd72359ee/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114372ba9b59567b97dc1ae261a5e4cf4162dd089003ae3a2f2aa8efd72359ee/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114372ba9b59567b97dc1ae261a5e4cf4162dd089003ae3a2f2aa8efd72359ee/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114372ba9b59567b97dc1ae261a5e4cf4162dd089003ae3a2f2aa8efd72359ee/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:06 compute-0 podman[257685]: 2025-11-24 09:47:06.668471704 +0000 UTC m=+0.086994220 container init 2bcc3f6b74feccee69b753e47d7cc4656ba8a54db8bd7c4a29440bb4766f2a4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 09:47:06 compute-0 podman[257685]: 2025-11-24 09:47:06.674618974 +0000 UTC m=+0.093141470 container start 2bcc3f6b74feccee69b753e47d7cc4656ba8a54db8bd7c4a29440bb4766f2a4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 09:47:06 compute-0 podman[257685]: nova_compute
Nov 24 09:47:06 compute-0 nova_compute[257700]: + sudo -E kolla_set_configs
Nov 24 09:47:06 compute-0 systemd[1]: Started nova_compute container.
Nov 24 09:47:06 compute-0 sudo[257613]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Validating config file
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Copying service configuration files
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Deleting /etc/ceph
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Creating directory /etc/ceph
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Writing out command to execute
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:06 compute-0 nova_compute[257700]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 09:47:06 compute-0 nova_compute[257700]: ++ cat /run_command
Nov 24 09:47:06 compute-0 nova_compute[257700]: + CMD=nova-compute
Nov 24 09:47:06 compute-0 nova_compute[257700]: + ARGS=
Nov 24 09:47:06 compute-0 nova_compute[257700]: + sudo kolla_copy_cacerts
Nov 24 09:47:06 compute-0 nova_compute[257700]: + [[ ! -n '' ]]
Nov 24 09:47:06 compute-0 nova_compute[257700]: + . kolla_extend_start
Nov 24 09:47:06 compute-0 nova_compute[257700]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 09:47:06 compute-0 nova_compute[257700]: Running command: 'nova-compute'
Nov 24 09:47:06 compute-0 nova_compute[257700]: + umask 0022
Nov 24 09:47:06 compute-0 nova_compute[257700]: + exec nova-compute
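[annotation] The kolla_set_configs run above is the COPY_ALWAYS strategy end to end: read config.json, delete each destination, copy the source over it, fix permissions, then write out the command that kolla_start execs. A simplified sketch of that loop, assuming Kolla's documented config.json layout and handling regular files only (the real tool also recreates directories such as /etc/ceph, and has error handling omitted here):

    # Simplified COPY_ALWAYS sketch, assuming a config.json shaped like
    # {"command": "...", "config_files": [{"source": ..., "dest": ...,
    #  "owner": ..., "perm": ...}, ...]}.
    import json
    import os
    import shutil

    with open('/var/lib/kolla/config_files/config.json') as f:
        cfg = json.load(f)

    for item in cfg.get('config_files', []):
        dest = item['dest']
        if os.path.lexists(dest):
            os.remove(dest)                    # "Deleting ..." lines
        shutil.copy2(item['source'], dest)     # "Copying ... to ..." lines
        os.chmod(dest, int(item['perm'], 8))   # "Setting permission ..."
        if 'owner' in item:
            shutil.chown(dest, user=item['owner'])

    # kolla_start then cats /run_command and execs it: nova-compute here.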
Nov 24 09:47:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:06 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:47:07.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:47:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:47:07.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:47:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:07.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:07 : epoch 6924297d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:47:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:07 : epoch 6924297d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:47:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:07 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:07 compute-0 ceph-mon[74331]: pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:47:07 compute-0 sudo[257863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leprrsjgmlebevmyhprfvbdmbkzhehdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763977627.6255496-4384-73850332726104/AnsiballZ_podman_container.py'
Nov 24 09:47:07 compute-0 sudo[257863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 09:47:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:08 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:08 compute-0 python3.9[257865]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 09:47:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:47:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:08.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:08 compute-0 systemd[1]: Started libpod-conmon-252d85ef275353b7778bcea8d13e017f59765962da915da603356f1a04d46e91.scope.
Nov 24 09:47:08 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d888b5e20de281fba92fbb3857b6e6f63a8be07ae3ce790fbd00e83532d125/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d888b5e20de281fba92fbb3857b6e6f63a8be07ae3ce790fbd00e83532d125/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d888b5e20de281fba92fbb3857b6e6f63a8be07ae3ce790fbd00e83532d125/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:08 compute-0 podman[257890]: 2025-11-24 09:47:08.311997631 +0000 UTC m=+0.102297795 container init 252d85ef275353b7778bcea8d13e017f59765962da915da603356f1a04d46e91 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 24 09:47:08 compute-0 podman[257890]: 2025-11-24 09:47:08.320225033 +0000 UTC m=+0.110525177 container start 252d85ef275353b7778bcea8d13e017f59765962da915da603356f1a04d46e91 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:47:08 compute-0 python3.9[257865]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Applying nova statedir ownership
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 24 09:47:08 compute-0 nova_compute_init[257913]: INFO:nova_statedir:Nova statedir ownership complete
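[annotation] The nova_compute_init ownership pass walks /var/lib/nova, re-owning anything not already 42436:42436 and skipping the paths named in NOVA_STATEDIR_OWNERSHIP_SKIP (here /var/lib/nova/compute_id). A simplified sketch of that walk; the SELinux relabel step appears only as a comment because it needs libselinux bindings:

    # Simplified sketch of the ownership pass logged above. Target uid/gid
    # and the skip variable are taken from this journal; the real script
    # also applies system_u:object_r:container_file_t:s0 to each path.
    import os

    TARGET_UID = TARGET_GID = 42436
    SKIP = set(filter(None,
        os.environ.get('NOVA_STATEDIR_OWNERSHIP_SKIP', '').split(':')))

    def walk(top):
        yield top
        for root, dirs, files in os.walk(top):
            for name in dirs + files:
                yield os.path.join(root, name)

    for path in walk('/var/lib/nova'):
        if path in SKIP:
            continue
        st = os.lstat(path)
        if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
            os.lchown(path, TARGET_UID, TARGET_GID)  # "Changing ownership"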
Nov 24 09:47:08 compute-0 systemd[1]: libpod-252d85ef275353b7778bcea8d13e017f59765962da915da603356f1a04d46e91.scope: Deactivated successfully.
Nov 24 09:47:08 compute-0 podman[257914]: 2025-11-24 09:47:08.377864609 +0000 UTC m=+0.031839132 container died 252d85ef275353b7778bcea8d13e017f59765962da915da603356f1a04d46e91 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 24 09:47:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-252d85ef275353b7778bcea8d13e017f59765962da915da603356f1a04d46e91-userdata-shm.mount: Deactivated successfully.
Nov 24 09:47:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-24d888b5e20de281fba92fbb3857b6e6f63a8be07ae3ce790fbd00e83532d125-merged.mount: Deactivated successfully.
Nov 24 09:47:08 compute-0 podman[257926]: 2025-11-24 09:47:08.441627637 +0000 UTC m=+0.055222288 container cleanup 252d85ef275353b7778bcea8d13e017f59765962da915da603356f1a04d46e91 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:47:08 compute-0 systemd[1]: libpod-conmon-252d85ef275353b7778bcea8d13e017f59765962da915da603356f1a04d46e91.scope: Deactivated successfully.
Nov 24 09:47:08 compute-0 ceph-mon[74331]: pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:47:08 compute-0 sudo[257863]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:08 compute-0 sudo[257975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:47:08 compute-0 sudo[257975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:08 compute-0 sudo[257975]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:08 compute-0 sudo[258001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:47:08 compute-0 sudo[258001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:08 compute-0 nova_compute[257700]: 2025-11-24 09:47:08.763 257704 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 09:47:08 compute-0 nova_compute[257700]: 2025-11-24 09:47:08.764 257704 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 09:47:08 compute-0 nova_compute[257700]: 2025-11-24 09:47:08.764 257704 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 24 09:47:08 compute-0 nova_compute[257700]: 2025-11-24 09:47:08.764 257704 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
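[annotation] The three "Loaded VIF plugin class" lines are os-vif enumerating its stevedore entry points at service start; the whole sequence is the side effect of a single call:

    # The plugin-loading lines above are produced by this one-time call:
    # os_vif.initialize() loads every plugin registered under the 'os_vif'
    # stevedore namespace (linux_bridge, noop, ovs in this journal).
    import os_vif

    os_vif.initialize()
    # After this, os_vif.plug()/unplug() dispatch to the loaded plugins.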
Nov 24 09:47:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:08 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c002df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:08 compute-0 nova_compute[257700]: 2025-11-24 09:47:08.898 257704 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:47:08 compute-0 nova_compute[257700]: 2025-11-24 09:47:08.910 257704 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:47:08 compute-0 nova_compute[257700]: 2025-11-24 09:47:08.911 257704 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
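[annotation] The grep against /sbin/iscsiadm is a capability probe: the string node.session.scan in the binary indicates manual-scan support, so exit code 1 ("failed. Not Retrying.") simply means the feature is absent, not that anything is broken. A sketch of that probe; the helper name is invented and the real check lives in the os-brick iSCSI connector:

    # Probe sketch. grep exits 1 when the pattern is absent, which
    # processutils surfaces as ProcessExecutionError after its single
    # attempt; hence the "failed. Not Retrying." DEBUG line above.
    from oslo_concurrency import processutils

    def iscsiadm_supports_manual_scan(binary='/sbin/iscsiadm'):
        try:
            processutils.execute('grep', '-F', 'node.session.scan',
                                 binary, attempts=1)
            return True
        except processutils.ProcessExecutionError:
            return False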
Nov 24 09:47:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:47:08 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:47:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:47:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:47:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:09.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:09 compute-0 sshd-session[226338]: Connection closed by 192.168.122.30 port 42592
Nov 24 09:47:09 compute-0 sshd-session[226335]: pam_unix(sshd:session): session closed for user zuul
Nov 24 09:47:09 compute-0 systemd-logind[822]: Session 54 logged out. Waiting for processes to exit.
Nov 24 09:47:09 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Nov 24 09:47:09 compute-0 systemd[1]: session-54.scope: Consumed 2min 12.126s CPU time.
Nov 24 09:47:09 compute-0 systemd-logind[822]: Removed session 54.
Nov 24 09:47:09 compute-0 sudo[258001]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.340 257704 INFO nova.virt.driver [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 24 09:47:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:09 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.443 257704 INFO nova.compute.provider_config [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.452 257704 DEBUG oslo_concurrency.lockutils [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.452 257704 DEBUG oslo_concurrency.lockutils [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.453 257704 DEBUG oslo_concurrency.lockutils [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.453 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.453 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.453 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.454 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.454 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.454 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.454 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.454 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.454 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.455 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.455 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.455 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.455 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.455 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.455 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.456 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.456 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.456 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.456 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.456 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.456 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.456 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.457 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.457 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.457 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.457 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.457 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.457 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.458 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.458 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.458 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.458 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.458 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.458 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.459 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.459 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.459 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.459 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.459 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.459 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.460 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.460 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.460 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.461 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.461 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.461 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.461 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.461 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.462 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.462 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.462 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.462 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.462 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.462 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.463 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.463 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.463 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.463 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.463 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.463 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.463 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.464 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.464 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.464 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.464 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.464 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.464 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.464 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.464 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.465 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.465 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.465 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.465 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.465 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.465 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.466 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.466 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.466 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.466 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.466 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.466 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.467 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.467 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.467 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.467 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.467 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.467 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.467 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.468 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.468 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.468 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.468 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.468 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.468 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.469 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.469 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.469 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.469 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.469 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.469 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.469 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.470 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.470 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.470 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.470 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.470 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.470 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.471 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.471 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.471 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.471 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.471 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.471 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.472 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.472 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.472 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.472 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.472 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.472 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.472 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.473 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.473 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.473 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.473 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.473 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.473 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.474 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.474 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.474 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.474 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.474 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.474 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.474 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.475 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.475 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.475 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.475 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.475 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.475 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.475 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.476 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.476 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.476 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.476 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.476 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.476 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.476 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.477 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
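Editor's note: the block above is the tail of the [DEFAULT] option dump that oslo.service writes at service start whenever log_options = True (the log_options line appears above at 09:47:09.463). A minimal sketch of the mechanism follows, assuming only that oslo.config is installed; the two option names and values are copied from the dump, everything else is illustrative and not Nova's own code.

    import logging
    import sys

    from oslo_config import cfg

    # Illustrative sketch: register two of the [DEFAULT] options listed
    # above, then reproduce the startup dump that oslo.service emits
    # when log_options = True.
    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt('state_path', default='/var/lib/nova',
                   help='Top-level directory for maintaining nova state.'),
        cfg.StrOpt('my_ip', default='192.168.122.100',  # value from the dump
                   help='IP address of this host.'),
    ])

    logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
    LOG = logging.getLogger('demo')

    if __name__ == '__main__':
        CONF(sys.argv[1:], project='demo')
        # Emits one DEBUG line per option, in the same "name = value"
        # shape seen in the log lines above.
        CONF.log_opt_values(LOG, logging.DEBUG)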
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.477 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.477 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.477 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.477 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.477 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.478 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.478 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
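Editor's note: from this point the option names carry a group prefix (oslo_concurrency., oslo_messaging_metrics., api., cache., ...), and the dump's source location shifts from cfg.py:2602 to cfg.py:2609, the per-group pass of the same log_opt_values() call. A short sketch of how such grouped options are registered and read, using the two oslo_concurrency names above; the group and option names come from the log, the surrounding code is illustrative.

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    group = cfg.OptGroup('oslo_concurrency')
    CONF.register_group(group)
    CONF.register_opts([
        cfg.BoolOpt('disable_process_locking', default=False),
        cfg.StrOpt('lock_path'),
    ], group=group)

    CONF([])  # a real service would pass sys.argv[1:] and --config-file
    # Grouped options surface as dotted attributes, which matches the
    # "oslo_concurrency.lock_path = /var/lib/nova/tmp" naming in the dump.
    print(CONF.oslo_concurrency.disable_process_locking)
    print(CONF.oslo_concurrency.lock_path)  # None here; set via nova.conf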
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.478 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.478 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.478 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.478 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.479 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.479 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.479 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.479 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.479 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.479 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.479 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.480 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.480 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.480 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.480 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.480 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.480 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.481 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.481 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.481 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.481 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.481 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.481 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.482 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.482 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.482 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.482 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.482 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.482 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.483 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.483 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.483 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.483 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.483 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.483 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.484 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.484 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.484 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.484 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.484 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.485 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.485 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.485 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.485 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.485 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.486 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.486 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.486 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.486 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.487 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.487 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.487 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.487 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.487 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.487 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
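Editor's note: the values rendered as **** above (transport_url at 09:47:09.474, cache.backend_argument at 09:47:09.482, and database.connection further down) are not literal asterisks in the configuration; oslo.config masks any option registered with secret=True when logging values. A minimal sketch of that behaviour, with a made-up URL standing in for the real one:

    import logging
    import sys

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        # secret=True is what turns the value into '****' in the dump.
        cfg.StrOpt('transport_url', secret=True,
                   default='rabbit://user:not-a-real-password@host:5672/'),
    ])

    logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
    CONF([])
    CONF.log_opt_values(logging.getLogger('demo'), logging.DEBUG)
    # Output contains: transport_url = ****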
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.488 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.488 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.488 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.488 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.488 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.488 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.489 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.489 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.489 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.489 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.489 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.489 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.489 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.490 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.490 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
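Editor's note: the dotted prefixes in the dump map one-to-one onto INI sections of the service's config file (nova.conf). A sketch that reconstructs a fragment of such a file from values shown in the dump and parses it back; the fragment contains only what the log shows, the scaffolding around it is illustrative.

    import tempfile
    import textwrap

    from oslo_config import cfg

    # Values copied from the dump lines above.
    NOVA_CONF_FRAGMENT = textwrap.dedent("""\
        [DEFAULT]
        my_ip = 192.168.122.100
        state_path = /var/lib/nova

        [cinder]
        catalog_info = volumev3:cinderv3:internalURL
        http_retries = 3
        """)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.StrOpt('my_ip'), cfg.StrOpt('state_path')])
    cinder = cfg.OptGroup('cinder')
    CONF.register_group(cinder)
    CONF.register_opts([cfg.StrOpt('catalog_info'),
                        cfg.IntOpt('http_retries')], group=cinder)

    with tempfile.NamedTemporaryFile('w', suffix='.conf') as f:
        f.write(NOVA_CONF_FRAGMENT)
        f.flush()
        CONF(['--config-file', f.name])
        assert CONF.cinder.http_retries == 3
        assert CONF.cinder.catalog_info == 'volumev3:cinderv3:internalURL'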
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.490 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.490 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.490 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.490 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.491 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.491 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.491 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.491 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.491 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.491 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.492 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.492 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.492 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.492 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.492 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.492 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.493 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.493 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.493 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.493 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.493 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.493 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.494 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.494 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.494 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.494 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.494 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.494 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.494 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.495 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.495 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.495 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.495 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.495 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.495 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.495 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
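Editor's note: the cyborg.* names above (service_type, valid_interfaces, region_name, endpoint_override, connect_retries, min_version/max_version, ...) are the standard keystoneauth1 adapter options, so this group is very likely registered through keystoneauth's loading helpers rather than declared by hand. That is an inference from the option names, not something the log states; the sketch below is written under that assumption.

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    # Registers service-type, service-name, valid-interfaces, region-name,
    # endpoint-override, version, min-version, max-version, connect-retries,
    # status-code-retries, ... under [cyborg], matching the names above.
    ks_loading.register_adapter_conf_options(CONF, 'cyborg')

    CONF([])
    # Default here is None; the dump shows 'accelerator' because the
    # service supplies its own default for service_type.
    print(CONF.cyborg.service_type)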
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.496 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.496 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.496 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.496 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.496 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.497 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.497 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.497 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.497 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.497 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.497 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.497 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.498 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.498 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.498 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.498 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.498 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.498 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.498 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.499 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.499 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.499 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.499 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.499 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.499 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.499 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.500 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.500 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.500 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.500 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.500 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.500 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.500 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.501 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.501 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.501 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.501 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.501 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.501 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.502 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.502 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.502 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.502 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.502 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.502 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.503 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.503 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.503 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.503 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.503 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.503 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.503 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.504 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.504 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.504 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.504 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.504 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.504 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.505 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.505 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.505 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.505 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.505 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.505 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.506 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.506 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.506 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.506 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.506 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.506 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.506 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.507 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.507 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.507 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.507 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.507 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.507 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.507 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.508 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.508 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.508 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.508 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.508 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.508 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.508 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.509 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.509 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.509 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.509 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.509 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.509 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.510 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.510 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.510 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.510 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.510 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.511 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.511 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.511 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.511 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.511 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.511 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.512 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.512 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.512 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.512 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.512 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.512 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.512 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.513 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.513 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.513 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.513 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.513 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.513 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.514 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.514 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.514 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.514 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.514 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.514 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.514 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.515 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.515 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.515 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.515 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.515 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.515 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.516 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.516 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.516 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.516 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.516 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.516 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.516 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.517 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.517 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.517 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.517 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.517 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.517 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.518 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.518 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.518 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.518 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.518 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.518 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.519 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.519 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.519 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.519 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.519 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.519 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.519 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.520 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.520 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.520 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.520 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.520 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.520 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.520 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.521 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.521 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.521 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.521 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.521 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.521 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.521 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.522 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.522 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.522 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.522 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.522 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.522 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.523 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.523 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.523 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.523 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.523 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.523 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.523 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.524 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.524 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.524 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.524 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.524 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.524 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.524 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.525 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.525 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.525 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.525 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.525 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.525 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.525 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.526 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.526 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.526 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.526 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.526 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.526 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.527 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.527 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.527 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.527 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.527 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.527 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.527 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.528 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.528 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.528 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.528 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.528 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.528 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.529 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.529 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.529 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.529 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.529 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.529 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.529 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.530 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.530 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.530 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.530 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.530 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.530 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.531 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.531 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.531 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.531 257704 WARNING oslo_config.cfg [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 09:47:09 compute-0 nova_compute[257700]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 09:47:09 compute-0 nova_compute[257700]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 09:47:09 compute-0 nova_compute[257700]: and ``live_migration_inbound_addr`` respectively.
Nov 24 09:47:09 compute-0 nova_compute[257700]: ).  Its value may be silently ignored in the future.
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.531 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.532 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.532 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.532 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.532 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.532 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.532 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.533 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.533 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.533 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.533 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.533 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.533 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.534 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.534 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.534 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.534 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.534 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.534 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.rbd_secret_uuid        = 84a084c3-61a7-5de7-8207-1f88efa59a64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.535 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.535 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.535 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.535 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.535 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.535 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.536 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.536 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.536 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.536 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.536 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.536 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.537 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.537 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.537 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.537 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.537 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.537 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.537 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.538 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.538 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.538 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.538 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.538 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.538 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.538 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.539 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.539 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.539 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.539 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.539 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.539 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.540 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.540 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.540 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.540 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.540 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.540 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.540 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.541 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.541 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.541 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.541 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.541 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.541 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.541 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.542 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.542 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.542 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.542 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.542 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.542 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.542 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.543 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.543 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.543 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.543 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.543 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.543 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.543 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.544 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.544 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.544 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.544 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.544 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.544 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.545 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.545 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.545 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.545 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.545 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.545 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.545 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.546 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.546 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.546 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.546 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.546 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.546 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.546 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.547 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.547 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.547 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.547 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.547 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.547 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.547 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.548 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.548 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.548 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.548 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.548 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.548 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.548 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.549 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.549 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.549 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.549 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.549 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.550 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.550 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.550 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.550 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.550 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.550 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.550 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.551 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.551 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.551 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.551 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.551 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.551 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.552 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.552 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.552 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.552 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.552 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.552 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.553 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.553 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.553 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.553 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.553 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.554 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.554 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.554 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.554 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.554 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.554 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.554 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.555 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.555 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.555 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.555 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.555 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.556 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.556 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.556 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.556 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.556 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.556 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.557 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.557 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.557 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.557 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.557 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.558 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.558 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.558 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.558 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.558 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.558 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.559 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.559 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.559 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.559 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.559 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.559 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.559 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.560 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.560 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.560 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.560 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.560 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.561 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.561 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
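
serial_console.enabled is False here, so the rest of the group is inert; when enabled, nova hands each guest a TCP port out of serial_console.port_range and clients connect through the ws:// base_url. A small parser for the "min:max" syntax shown above (10000:20000); the helper name and validation are illustrative, not nova's own code.

    def parse_port_range(value: str) -> tuple[int, int]:
        """Split a 'min:max' range such as '10000:20000'."""
        lo, hi = (int(part) for part in value.split(':'))
        if not 0 < lo <= hi <= 65535:
            raise ValueError(f'invalid port range: {value!r}')
        return lo, hi

    assert parse_port_range('10000:20000') == (10000, 20000)
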
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.561 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.561 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.561 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.561 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.562 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.562 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.562 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.562 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.562 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.562 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.562 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.563 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.563 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.563 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.563 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.563 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.563 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.564 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.564 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.564 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.564 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.564 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.564 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.565 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.565 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.565 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.565 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.565 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.565 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.566 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.566 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.566 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.566 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.566 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.566 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.567 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.567 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.567 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.567 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.567 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.567 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.568 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.568 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.568 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.568 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.568 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.568 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.568 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.569 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.569 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.569 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.569 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.569 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.569 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.569 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.570 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.570 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.570 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.570 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.570 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.571 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
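
The whole vmware.* block sits at stock defaults (host_ip and host_username are None), consistent with this node running libvirt/KVM rather than the VMware driver. Purely to make the two port options concrete: vnc_port is the first port such a driver would assign and vnc_port_total the size of the window, so the usable range here would be 5900 through 15899 (a worked example, not nova code).

    vnc_port, vnc_port_total = 5900, 10000
    ports = range(vnc_port, vnc_port + vnc_port_total)
    print(ports.start, ports[-1])  # 5900 15899
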
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.571 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.571 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.571 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.571 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.572 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.572 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.572 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.572 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.572 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.573 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
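
The vnc group is the active console path: enabled, auth_schemes ['none'] (no VeNCrypt, hence the unset vencrypt_* files), QEMU listening on ::0, and the proxy reaching this hypervisor at 192.168.122.100. The browser-facing URL is formed from novncproxy_base_url plus a per-console token; the exact query parameter nova appends varies by release, so the sketch below only shows the general shape, and the token value is invented.

    from urllib.parse import urlencode

    base_url = ('https://nova-novncproxy-cell1-public-openstack'
                '.apps-crc.testing/vnc_lite.html')

    def console_url(token: str) -> str:
        # Illustrative only: append the console token as a query
        # string to the configured base URL.
        return f'{base_url}?{urlencode({"token": token})}'

    print(console_url('3f54d38f-example-token'))
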
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.573 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.573 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.573 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.573 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.573 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.574 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.574 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.574 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.574 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.574 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.574 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.574 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.575 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.575 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.575 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.575 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.575 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.575 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.576 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.576 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.576 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
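
Most workarounds.* flags sit at their safe False defaults; the values that appear to deviate from upstream defaults are enable_qemu_monitor_announce_self = True (after live migration, ask QEMU to re-announce the guest's MAC so switches relearn it), skip_cpu_compare_on_dest = True, and reserve_disk_resource_for_image_cache = True. A toy model of the announce loop driven by the two qemu_monitor_announce_self_* values above; the real announcements go through the libvirt/QEMU monitor, not print().

    import time

    def announce_self(count: int = 3, interval: float = 1.0) -> None:
        # Mirrors workarounds.qemu_monitor_announce_self_count and
        # ..._interval from the dump above; stand-in for the real
        # QEMU 'announce-self' monitor command.
        for attempt in range(1, count + 1):
            print(f'announce-self attempt {attempt}/{count}')
            time.sleep(interval)

    announce_self()
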
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.576 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.576 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.576 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.577 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.577 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.577 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.577 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.577 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.578 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.578 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.578 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
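
wsgi.wsgi_log_format is a Python %-style template, so its field names must match the mapping the WSGI server supplies per request. Rendering it against a made-up request shows the access-log line it produces (every field value below is invented).

    fmt = ('%(client_ip)s "%(request_line)s" status: %(status_code)s '
           'len: %(body_length)s time: %(wall_seconds).7f')
    print(fmt % {
        'client_ip': '192.168.122.10',
        'request_line': 'GET /v2.1/servers/detail HTTP/1.1',
        'status_code': 200,
        'body_length': 1842,
        'wall_seconds': 0.0312456,
    })
    # -> 192.168.122.10 "GET /v2.1/servers/detail HTTP/1.1"
    #    status: 200 len: 1842 time: 0.0312456
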
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.578 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.579 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.579 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.579 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.579 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.579 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.580 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.580 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.580 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.580 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.580 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.581 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.581 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.581 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
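
enforce_new_defaults and enforce_scope both True means this deployment runs the "secure RBAC" policy posture: the new role-based default rules apply and token scope is checked. A minimal Enforcer wired to the same [oslo_policy] options; the rule name, target, and credentials in the comment are placeholders, not nova's actual checks.

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    # Picks up policy_file, policy_dirs, enforce_scope and
    # enforce_new_defaults from the [oslo_policy] section.
    enforcer = policy.Enforcer(CONF)
    # A check would then look like:
    #   enforcer.authorize('os_compute_api:servers:index',
    #                      target, credentials)
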
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.581 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.581 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.582 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.582 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.582 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.582 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.582 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.583 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.583 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.583 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.583 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.583 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.583 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.583 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.584 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.584 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.584 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.584 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.584 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.584 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.585 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.585 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.585 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.585 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.585 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.585 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.585 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.586 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.586 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.586 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.586 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.586 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.586 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.587 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.587 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
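
Two of the rabbit values interact: heartbeat_timeout_threshold is the AMQP heartbeat window advertised to the broker, and heartbeat_rate is how many times per window oslo.messaging services it, so with 60 and 2 the client runs its heartbeat check roughly every 30 seconds (that division is the documented meaning of heartbeat_rate).

    heartbeat_timeout_threshold = 60  # seconds, from the dump above
    heartbeat_rate = 2                # checks per window
    check_interval = heartbeat_timeout_threshold / heartbeat_rate
    print(check_interval)  # 30.0 seconds between heartbeat checks
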
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.587 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.587 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.587 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.588 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
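
With oslo_messaging_notifications.driver = ['noop'], nova still builds its notification payloads but the messaging layer discards them; transport_url is masked (****) like any secret option. A sketch of a notifier constructed the same way; the publisher_id and payload are placeholders, and passing the driver explicitly like this is an assumption (it normally comes from the config section itself).

    import oslo_messaging
    from oslo_config import cfg

    CONF = cfg.CONF
    transport = oslo_messaging.get_notification_transport(CONF)
    notifier = oslo_messaging.Notifier(
        transport,
        publisher_id='nova-compute:compute-0',  # placeholder
        driver='noop',
        topics=['notifications'])
    # Constructed, serialized, then dropped by the noop driver:
    notifier.info({}, 'compute.instance.create.end', {'uuid': '...'})
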
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.588 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.588 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.588 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.588 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.588 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.588 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.589 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.589 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.589 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.589 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.589 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.590 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.590 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.590 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.590 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.590 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.590 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.590 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.591 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.591 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.591 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.591 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.591 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.591 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.592 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.592 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.592 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.592 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.592 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.592 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.593 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.593 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.593 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.593 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.593 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.593 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.594 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.594 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
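
The oslo_limit block is what lets the service consult Keystone's unified limits: a password auth against https://keystone-internal.openstack.svc:5000 as user nova in the Default domain, with system_scope = all. keystoneauth1 can materialize exactly such a conf-driven auth and session; the group name and option values come from this log, while the surrounding wiring is a sketch rather than nova's actual code path.

    from keystoneauth1 import loading
    from oslo_config import cfg

    CONF = cfg.CONF
    loading.register_auth_conf_options(CONF, 'oslo_limit')
    loading.register_session_conf_options(CONF, 'oslo_limit')

    # Builds an auth plugin from oslo_limit.auth_type, auth_url,
    # username, password, user_domain_name, system_scope, ... and a
    # session honoring cafile/insecure/timeout from the same group.
    auth = loading.load_auth_from_conf_options(CONF, 'oslo_limit')
    session = loading.load_session_from_conf_options(
        CONF, 'oslo_limit', auth=auth)
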
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.594 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.594 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.594 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.594 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.595 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.595 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.595 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.595 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.595 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.596 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.596 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.596 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.596 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.596 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.597 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.597 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.597 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.597 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.597 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.598 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.598 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.598 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.598 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.598 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.599 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.599 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.599 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.599 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.599 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.600 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.600 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.600 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.600 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.600 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.601 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.601 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.601 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.601 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.601 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.601 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.601 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.602 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.602 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.602 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.602 257704 DEBUG oslo_service.service [None req-077c3465-4393-46c1-ba5b-512acebd3770 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.604 257704 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.619 257704 INFO nova.virt.node [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Determined node identity a50ce3b5-7e9e-4263-a4aa-c35573ac7257 from /var/lib/nova/compute_id
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.620 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.621 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.621 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.621 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.636 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb107470550> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.639 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb107470550> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.641 257704 INFO nova.virt.libvirt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Connection event '1' reason 'None'
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.649 257704 INFO nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 09:47:09 compute-0 nova_compute[257700]: 
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <host>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <uuid>4c455ecc-8696-436b-b07b-3b4a91ae800f</uuid>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <cpu>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <arch>x86_64</arch>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model>EPYC-Rome-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <vendor>AMD</vendor>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <microcode version='16777317'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <signature family='23' model='49' stepping='0'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='x2apic'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='tsc-deadline'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='osxsave'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='hypervisor'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='tsc_adjust'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='spec-ctrl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='stibp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='arch-capabilities'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='cmp_legacy'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='topoext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='virt-ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='lbrv'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='tsc-scale'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='vmcb-clean'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='pause-filter'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='pfthreshold'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='svme-addr-chk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='rdctl-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='skip-l1dfl-vmentry'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='mds-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature name='pschange-mc-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <pages unit='KiB' size='4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <pages unit='KiB' size='2048'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <pages unit='KiB' size='1048576'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </cpu>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <power_management>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <suspend_mem/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </power_management>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <iommu support='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <migration_features>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <live/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <uri_transports>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <uri_transport>tcp</uri_transport>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <uri_transport>rdma</uri_transport>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </uri_transports>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </migration_features>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <topology>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <cells num='1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <cell id='0'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:           <memory unit='KiB'>7864320</memory>
Nov 24 09:47:09 compute-0 nova_compute[257700]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 24 09:47:09 compute-0 nova_compute[257700]:           <pages unit='KiB' size='2048'>0</pages>
Nov 24 09:47:09 compute-0 nova_compute[257700]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 24 09:47:09 compute-0 nova_compute[257700]:           <distances>
Nov 24 09:47:09 compute-0 nova_compute[257700]:             <sibling id='0' value='10'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:           </distances>
Nov 24 09:47:09 compute-0 nova_compute[257700]:           <cpus num='8'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:           </cpus>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         </cell>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </cells>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </topology>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <cache>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </cache>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <secmodel>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model>selinux</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <doi>0</doi>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </secmodel>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <secmodel>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model>dac</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <doi>0</doi>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </secmodel>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </host>
Nov 24 09:47:09 compute-0 nova_compute[257700]: 
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <guest>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <os_type>hvm</os_type>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <arch name='i686'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <wordsize>32</wordsize>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <domain type='qemu'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <domain type='kvm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </arch>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <features>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <pae/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <nonpae/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <acpi default='on' toggle='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <apic default='on' toggle='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <cpuselection/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <deviceboot/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <disksnapshot default='on' toggle='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <externalSnapshot/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </features>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </guest>
Nov 24 09:47:09 compute-0 nova_compute[257700]: 
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <guest>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <os_type>hvm</os_type>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <arch name='x86_64'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <wordsize>64</wordsize>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <domain type='qemu'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <domain type='kvm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </arch>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <features>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <acpi default='on' toggle='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <apic default='on' toggle='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <cpuselection/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <deviceboot/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <disksnapshot default='on' toggle='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <externalSnapshot/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </features>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </guest>
Nov 24 09:47:09 compute-0 nova_compute[257700]: 
Nov 24 09:47:09 compute-0 nova_compute[257700]: </capabilities>
Nov 24 09:47:09 compute-0 nova_compute[257700]: 
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.654 257704 DEBUG nova.virt.libvirt.volume.mount [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.659 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.665 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 09:47:09 compute-0 nova_compute[257700]: <domainCapabilities>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <domain>kvm</domain>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <arch>i686</arch>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <vcpu max='4096'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <iothreads supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <os supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <enum name='firmware'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <loader supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>rom</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pflash</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='readonly'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>yes</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>no</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='secure'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>no</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </loader>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </os>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <cpu>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>on</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>off</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='maximumMigratable'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>on</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>off</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <vendor>AMD</vendor>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='succor'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='custom' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cooperlake'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='GraniteRapids'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10-128'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10-256'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10-512'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='KnightsMill'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SierraForest'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='athlon'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='athlon-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='core2duo'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='core2duo-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='coreduo'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='coreduo-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='n270'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='n270-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='phenom'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='phenom-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </cpu>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <memoryBacking supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <enum name='sourceType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>file</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>anonymous</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>memfd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </memoryBacking>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <devices>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <disk supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='diskDevice'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>disk</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>cdrom</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>floppy</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>lun</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='bus'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>fdc</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>scsi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>usb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>sata</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </disk>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <graphics supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vnc</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>egl-headless</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>dbus</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </graphics>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <video supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='modelType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vga</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>cirrus</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>none</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>bochs</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>ramfb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </video>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <hostdev supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='mode'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>subsystem</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='startupPolicy'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>default</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>mandatory</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>requisite</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>optional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='subsysType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>usb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pci</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>scsi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='capsType'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='pciBackend'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </hostdev>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <rng supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>random</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>egd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>builtin</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </rng>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <filesystem supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='driverType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>path</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>handle</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtiofs</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </filesystem>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <tpm supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tpm-tis</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tpm-crb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>emulator</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>external</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendVersion'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>2.0</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </tpm>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <redirdev supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='bus'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>usb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </redirdev>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <channel supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pty</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>unix</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </channel>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <crypto supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>qemu</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>builtin</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </crypto>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <interface supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>default</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>passt</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </interface>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <panic supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>isa</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>hyperv</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </panic>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <console supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>null</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vc</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pty</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>dev</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>file</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pipe</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>stdio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>udp</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tcp</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>unix</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>qemu-vdagent</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>dbus</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </console>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </devices>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <features>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <gic supported='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <genid supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <backup supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <async-teardown supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <ps2 supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <sev supported='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <sgx supported='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <hyperv supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='features'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>relaxed</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vapic</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>spinlocks</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vpindex</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>runtime</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>synic</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>stimer</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>reset</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vendor_id</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>frequencies</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>reenlightenment</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tlbflush</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>ipi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>avic</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>emsr_bitmap</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>xmm_input</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <defaults>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </defaults>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </hyperv>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <launchSecurity supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='sectype'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tdx</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </launchSecurity>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </features>
Nov 24 09:47:09 compute-0 nova_compute[257700]: </domainCapabilities>
Nov 24 09:47:09 compute-0 nova_compute[257700]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
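[editor's note] The block above is the raw <domainCapabilities> document that nova-compute requests from libvirt once per (arch, machine type) pair and logs at debug level from _get_domain_capabilities. The same document can be fetched by hand with "virsh domcapabilities", or programmatically through the libvirt-python bindings. A minimal sketch follows; it is not nova's actual code, and the connection URI plus the emulator/arch/machine arguments are assumptions taken from the surrounding log lines:

    import libvirt

    # Connect to the local libvirt daemon (URI assumed; adjust for your host).
    conn = libvirt.open('qemu:///system')

    # Ask libvirt for the capabilities of one emulator/arch/machine/virt-type
    # combination -- the same call nova's host.py wraps. Values here mirror
    # the <path>, arch and machine_type fields visible in this log.
    caps_xml = conn.getDomainCapabilities(
        emulatorbin='/usr/libexec/qemu-kvm',
        arch='i686',
        machine='pc',
        virttype='kvm',
    )
    print(caps_xml)  # prints a <domainCapabilities> document like the one above
    conn.close()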
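[editor's note] Within <mode name='custom'>, each <model> element carries usable='yes' or 'no' (plus optional deprecated and canonical attributes), and every non-usable model is paired with a <blockers> element naming the CPU features the host lacks. A short parsing sketch, assuming caps_xml holds the document above (standard-library xml.etree only; again, not nova code):

    import xml.etree.ElementTree as ET

    root = ET.fromstring(caps_xml)
    custom = root.find("./cpu/mode[@name='custom']")

    # Map each blocked model name to the feature flags the host is missing.
    blockers = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in custom.findall('blockers')
    }

    for model in custom.findall('model'):
        name = model.text
        if model.get('usable') == 'yes':
            print(f'{name}: usable')
        else:
            print(f"{name}: blocked by {', '.join(blockers.get(name, []))}")

Run against the dump above, this reports SandyBridge and Westmere as usable while every Skylake and SapphireRapids variant is blocked on Intel-only flags (erms, invpcid, pcid, the avx512 group, and so on), which matches the EPYC-Rome host model shown in the i686 capabilities dump that follows.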
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.675 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 09:47:09 compute-0 nova_compute[257700]: <domainCapabilities>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <domain>kvm</domain>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <arch>i686</arch>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <vcpu max='240'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <iothreads supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <os supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <enum name='firmware'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <loader supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>rom</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pflash</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='readonly'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>yes</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>no</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='secure'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>no</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </loader>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </os>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <cpu>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>on</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>off</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='maximumMigratable'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>on</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>off</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <vendor>AMD</vendor>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='succor'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='custom' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cooperlake'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='GraniteRapids'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10-128'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10-256'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10-512'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='KnightsMill'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SierraForest'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='athlon'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='athlon-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='core2duo'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='core2duo-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='coreduo'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='coreduo-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='n270'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='n270-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='phenom'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='phenom-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </cpu>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <memoryBacking supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <enum name='sourceType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>file</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>anonymous</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>memfd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </memoryBacking>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <devices>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <disk supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='diskDevice'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>disk</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>cdrom</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>floppy</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>lun</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='bus'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>ide</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>fdc</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>scsi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>usb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>sata</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </disk>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <graphics supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vnc</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>egl-headless</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>dbus</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </graphics>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <video supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='modelType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vga</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>cirrus</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>none</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>bochs</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>ramfb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </video>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <hostdev supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='mode'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>subsystem</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='startupPolicy'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>default</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>mandatory</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>requisite</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>optional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='subsysType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>usb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pci</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>scsi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='capsType'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='pciBackend'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </hostdev>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <rng supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>random</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>egd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>builtin</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </rng>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <filesystem supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='driverType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>path</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>handle</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtiofs</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </filesystem>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <tpm supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tpm-tis</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tpm-crb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>emulator</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>external</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendVersion'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>2.0</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </tpm>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <redirdev supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='bus'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>usb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </redirdev>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <channel supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pty</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>unix</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </channel>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <crypto supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>qemu</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>builtin</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </crypto>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <interface supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>default</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>passt</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </interface>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <panic supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>isa</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>hyperv</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </panic>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <console supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>null</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vc</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pty</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>dev</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>file</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pipe</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>stdio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>udp</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tcp</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>unix</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>qemu-vdagent</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>dbus</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </console>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </devices>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <features>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <gic supported='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <genid supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <backup supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <async-teardown supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <ps2 supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <sev supported='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <sgx supported='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <hyperv supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='features'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>relaxed</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vapic</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>spinlocks</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vpindex</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>runtime</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>synic</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>stimer</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>reset</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vendor_id</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>frequencies</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>reenlightenment</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tlbflush</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>ipi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>avic</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>emsr_bitmap</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>xmm_input</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <defaults>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </defaults>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </hyperv>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <launchSecurity supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='sectype'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tdx</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </launchSecurity>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </features>
Nov 24 09:47:09 compute-0 nova_compute[257700]: </domainCapabilities>
Nov 24 09:47:09 compute-0 nova_compute[257700]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
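The <domainCapabilities> document that ends above is what nova's _get_domain_capabilities receives back from libvirt. For reference, an equivalent query can be made directly with libvirt-python; a minimal sketch, assuming libvirt-python is installed and a local qemu:///system URI is reachable (the arguments mirror the <path>, <arch>, <machine>, and <domain> elements seen in these dumps; this is an illustration, not nova's implementation):

    import libvirt

    # Ask libvirt for domain capabilities the way the debug lines here
    # describe: emulator binary, architecture, machine type, virt type.
    conn = libvirt.open("qemu:///system")
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # <path> in the dump
        "x86_64",                 # <arch>
        "q35",                    # machine type, per the log line below
        "kvm",                    # <domain>
        0,                        # flags (unused)
    )
    print(caps_xml)
    conn.close()

The returned string is the same XML that nova logs verbatim in this file.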
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.707 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
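Before the q35 document below: once such a dump is in hand, the usable CPU models and the features blocking the rest can be pulled out with the standard library alone. A minimal sketch, assuming caps_xml holds a <domainCapabilities> string like the ones in this log (variable names are illustrative, not nova's internals):

    import xml.etree.ElementTree as ET

    root = ET.fromstring(caps_xml)
    custom = root.find("./cpu/mode[@name='custom']")

    # Models libvirt marks as directly usable on this host.
    usable = [m.text for m in custom.findall("model")
              if m.get("usable") == "yes"]

    # For the rest, a <blockers> element lists the missing features.
    blockers = {
        b.get("model"): [f.get("name") for f in b.findall("feature")]
        for b in custom.findall("blockers")
    }

    print(usable)
    print(blockers.get("EPYC-Rome"))  # ['xsaves'] per the q35 dump below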
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.712 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 09:47:09 compute-0 nova_compute[257700]: <domainCapabilities>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <domain>kvm</domain>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <arch>x86_64</arch>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <vcpu max='4096'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <iothreads supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <os supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <enum name='firmware'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>efi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <loader supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>rom</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pflash</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='readonly'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>yes</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>no</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='secure'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>yes</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>no</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </loader>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </os>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <cpu>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>on</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>off</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='maximumMigratable'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>on</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>off</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <vendor>AMD</vendor>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='succor'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='custom' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cooperlake'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='GraniteRapids'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10-128'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10-256'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10-512'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='KnightsMill'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SierraForest'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='athlon'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='athlon-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='core2duo'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='core2duo-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='coreduo'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='coreduo-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='n270'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='n270-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='phenom'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='phenom-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </cpu>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <memoryBacking supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <enum name='sourceType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>file</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>anonymous</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>memfd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </memoryBacking>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <devices>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <disk supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='diskDevice'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>disk</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>cdrom</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>floppy</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>lun</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='bus'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>fdc</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>scsi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>usb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>sata</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </disk>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <graphics supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vnc</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>egl-headless</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>dbus</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </graphics>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <video supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='modelType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vga</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>cirrus</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>none</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>bochs</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>ramfb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </video>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <hostdev supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='mode'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>subsystem</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='startupPolicy'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>default</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>mandatory</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>requisite</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>optional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='subsysType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>usb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pci</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>scsi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='capsType'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='pciBackend'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </hostdev>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <rng supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>random</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>egd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>builtin</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </rng>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <filesystem supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='driverType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>path</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>handle</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtiofs</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </filesystem>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <tpm supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tpm-tis</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tpm-crb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>emulator</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>external</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendVersion'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>2.0</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </tpm>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <redirdev supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='bus'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>usb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </redirdev>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <channel supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pty</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>unix</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </channel>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <crypto supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>qemu</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>builtin</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </crypto>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <interface supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>default</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>passt</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </interface>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <panic supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>isa</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>hyperv</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </panic>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <console supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>null</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vc</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pty</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>dev</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>file</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pipe</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>stdio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>udp</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tcp</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>unix</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>qemu-vdagent</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>dbus</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </console>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </devices>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <features>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <gic supported='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <genid supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <backup supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <async-teardown supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <ps2 supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <sev supported='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <sgx supported='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <hyperv supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='features'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>relaxed</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vapic</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>spinlocks</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vpindex</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>runtime</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>synic</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>stimer</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>reset</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vendor_id</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>frequencies</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>reenlightenment</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tlbflush</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>ipi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>avic</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>emsr_bitmap</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>xmm_input</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <defaults>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </defaults>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </hyperv>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <launchSecurity supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='sectype'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tdx</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </launchSecurity>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </features>
Nov 24 09:47:09 compute-0 nova_compute[257700]: </domainCapabilities>
Nov 24 09:47:09 compute-0 nova_compute[257700]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 09:47:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.790 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 24 09:47:09 compute-0 nova_compute[257700]: <domainCapabilities>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <path>/usr/libexec/qemu-kvm</path>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <domain>kvm</domain>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <arch>x86_64</arch>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <vcpu max='240'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <iothreads supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <os supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <enum name='firmware'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <loader supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>rom</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pflash</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='readonly'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>yes</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>no</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='secure'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>no</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </loader>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </os>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <cpu>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='host-passthrough' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='hostPassthroughMigratable'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>on</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>off</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='maximum' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='maximumMigratable'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>on</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>off</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='host-model' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <vendor>AMD</vendor>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='x2apic'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='tsc-deadline'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='hypervisor'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='tsc_adjust'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='spec-ctrl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='stibp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='cmp_legacy'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='overflow-recov'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='succor'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='ibrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='amd-ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='virt-ssbd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='lbrv'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='tsc-scale'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='vmcb-clean'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='flushbyasid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='pause-filter'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='pfthreshold'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='svme-addr-chk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <feature policy='disable' name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <mode name='custom' supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Broadwell-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 09:47:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cascadelake-Server-v5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cooperlake'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cooperlake-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Cooperlake-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Denverton-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Dhyana-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Genoa'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Genoa-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='auto-ibrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Milan'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Milan-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Milan-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amd-psfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='no-nested-data-bp'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='null-sel-clr-base'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='stibp-always-on'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-Rome-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='EPYC-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='GraniteRapids'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='GraniteRapids-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='GraniteRapids-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10-128'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10-256'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx10-512'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='prefetchiti'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Haswell-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-noTSX'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v6'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Icelake-Server-v7'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='IvyBridge-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='KnightsMill'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='KnightsMill-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4fmaps'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-4vnniw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512er'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512pf'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G4-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Opteron_G5-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fma4'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tbm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xop'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SapphireRapids-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='amx-tile'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-bf16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-fp16'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512-vpopcntdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bitalg'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vbmi2'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrc'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fzrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='la57'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='taa-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='tsx-ldtrk'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xfd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SierraForest'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='SierraForest-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ifma'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-ne-convert'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx-vnni-int8'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='bus-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cmpccxadd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fbsdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='fsrs'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ibrs-all'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mcdt-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pbrsb-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='psdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='sbdr-ssdp-no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='serialize'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vaes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='vpclmulqdq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Client-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='hle'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='rtm'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Skylake-Server-v5'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512bw'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512cd'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512dq'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512f'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='avx512vl'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='invpcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pcid'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='pku'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='mpx'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v2'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v3'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='core-capability'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='split-lock-detect'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='Snowridge-v4'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='cldemote'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='erms'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='gfni'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdir64b'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='movdiri'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='xsaves'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='athlon'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='athlon-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='core2duo'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='core2duo-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='coreduo'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='coreduo-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='n270'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='n270-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='ss'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='phenom'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <blockers model='phenom-v1'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnow'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <feature name='3dnowext'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </blockers>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </mode>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </cpu>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <memoryBacking supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <enum name='sourceType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>file</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>anonymous</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <value>memfd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </memoryBacking>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <devices>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <disk supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='diskDevice'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>disk</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>cdrom</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>floppy</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>lun</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='bus'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>ide</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>fdc</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>scsi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>usb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>sata</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </disk>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <graphics supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vnc</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>egl-headless</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>dbus</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </graphics>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <video supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='modelType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vga</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>cirrus</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>none</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>bochs</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>ramfb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </video>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <hostdev supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='mode'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>subsystem</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='startupPolicy'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>default</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>mandatory</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>requisite</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>optional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='subsysType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>usb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pci</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>scsi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='capsType'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='pciBackend'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </hostdev>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <rng supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtio-non-transitional</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>random</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>egd</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>builtin</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </rng>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <filesystem supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='driverType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>path</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>handle</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>virtiofs</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </filesystem>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <tpm supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tpm-tis</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tpm-crb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>emulator</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>external</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendVersion'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>2.0</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </tpm>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <redirdev supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='bus'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>usb</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </redirdev>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <channel supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pty</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>unix</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </channel>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <crypto supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>qemu</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendModel'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>builtin</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </crypto>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <interface supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='backendType'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>default</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>passt</value>
Nov 24 09:47:09 compute-0 sudo[258083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </interface>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <panic supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='model'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>isa</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>hyperv</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </panic>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <console supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='type'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>null</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vc</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pty</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>dev</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>file</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>pipe</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>stdio</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>udp</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tcp</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>unix</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>qemu-vdagent</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>dbus</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </console>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </devices>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   <features>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <gic supported='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <vmcoreinfo supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <genid supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <backingStoreInput supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <backup supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <async-teardown supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <ps2 supported='yes'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <sev supported='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <sgx supported='no'/>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <hyperv supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='features'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>relaxed</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vapic</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>spinlocks</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vpindex</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>runtime</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>synic</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>stimer</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>reset</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>vendor_id</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>frequencies</value>
Nov 24 09:47:09 compute-0 sudo[258083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>reenlightenment</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tlbflush</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>ipi</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>avic</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>emsr_bitmap</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>xmm_input</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <defaults>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <spinlocks>4095</spinlocks>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <stimer_direct>on</stimer_direct>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <tlbflush_direct>on</tlbflush_direct>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <tlbflush_extended>on</tlbflush_extended>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </defaults>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </hyperv>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     <launchSecurity supported='yes'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       <enum name='sectype'>
Nov 24 09:47:09 compute-0 nova_compute[257700]:         <value>tdx</value>
Nov 24 09:47:09 compute-0 nova_compute[257700]:       </enum>
Nov 24 09:47:09 compute-0 nova_compute[257700]:     </launchSecurity>
Nov 24 09:47:09 compute-0 nova_compute[257700]:   </features>
Nov 24 09:47:09 compute-0 nova_compute[257700]: </domainCapabilities>
Nov 24 09:47:09 compute-0 nova_compute[257700]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.866 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.866 257704 INFO nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Secure Boot support detected
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.868 257704 INFO nova.virt.libvirt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.876 257704 DEBUG nova.virt.libvirt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.890 257704 INFO nova.virt.node [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Determined node identity a50ce3b5-7e9e-4263-a4aa-c35573ac7257 from /var/lib/nova/compute_id
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.902 257704 WARNING nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Compute nodes ['a50ce3b5-7e9e-4263-a4aa-c35573ac7257'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.922 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.934 257704 WARNING nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.934 257704 DEBUG oslo_concurrency.lockutils [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.934 257704 DEBUG oslo_concurrency.lockutils [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.934 257704 DEBUG oslo_concurrency.lockutils [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.935 257704 DEBUG nova.compute.resource_tracker [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:47:09 compute-0 nova_compute[257700]: 2025-11-24 09:47:09.935 257704 DEBUG oslo_concurrency.processutils [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:47:09 compute-0 sudo[258083]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:10 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:10 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:10 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:10 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:47:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:47:10 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:10 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:47:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:47:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:47:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:10 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:10 compute-0 sudo[258109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:47:10 compute-0 sudo[258109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:47:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:10.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:10 compute-0 rsyslogd[1004]: imjournal from <np0005533251:nova_compute>: begin to drop messages due to rate-limiting
Nov 24 09:47:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:10 : epoch 6924297d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:47:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:47:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4043361646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:10 compute-0 nova_compute[257700]: 2025-11-24 09:47:10.395 257704 DEBUG oslo_concurrency.processutils [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:47:10 compute-0 podman[258195]: 2025-11-24 09:47:10.457031433 +0000 UTC m=+0.055798062 container create b34a0f48706a260fd570c60e2429940689bca04d521781b9bde05a03a7f7a6c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:47:10 compute-0 systemd[1]: Started libpod-conmon-b34a0f48706a260fd570c60e2429940689bca04d521781b9bde05a03a7f7a6c3.scope.
Nov 24 09:47:10 compute-0 podman[258195]: 2025-11-24 09:47:10.432434169 +0000 UTC m=+0.031200828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:47:10 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:47:10 compute-0 podman[258195]: 2025-11-24 09:47:10.552316834 +0000 UTC m=+0.151083473 container init b34a0f48706a260fd570c60e2429940689bca04d521781b9bde05a03a7f7a6c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_borg, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 09:47:10 compute-0 podman[258195]: 2025-11-24 09:47:10.55987724 +0000 UTC m=+0.158643869 container start b34a0f48706a260fd570c60e2429940689bca04d521781b9bde05a03a7f7a6c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:47:10 compute-0 nova_compute[257700]: 2025-11-24 09:47:10.560 257704 WARNING nova.virt.libvirt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:47:10 compute-0 nova_compute[257700]: 2025-11-24 09:47:10.562 257704 DEBUG nova.compute.resource_tracker [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4882MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:47:10 compute-0 nova_compute[257700]: 2025-11-24 09:47:10.562 257704 DEBUG oslo_concurrency.lockutils [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:47:10 compute-0 nova_compute[257700]: 2025-11-24 09:47:10.562 257704 DEBUG oslo_concurrency.lockutils [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:47:10 compute-0 flamboyant_borg[258211]: 167 167
Nov 24 09:47:10 compute-0 systemd[1]: libpod-b34a0f48706a260fd570c60e2429940689bca04d521781b9bde05a03a7f7a6c3.scope: Deactivated successfully.
Nov 24 09:47:10 compute-0 conmon[258211]: conmon b34a0f48706a260fd570 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b34a0f48706a260fd570c60e2429940689bca04d521781b9bde05a03a7f7a6c3.scope/container/memory.events
Nov 24 09:47:10 compute-0 podman[258195]: 2025-11-24 09:47:10.566405151 +0000 UTC m=+0.165171780 container attach b34a0f48706a260fd570c60e2429940689bca04d521781b9bde05a03a7f7a6c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_borg, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:47:10 compute-0 podman[258195]: 2025-11-24 09:47:10.566902323 +0000 UTC m=+0.165668952 container died b34a0f48706a260fd570c60e2429940689bca04d521781b9bde05a03a7f7a6c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_borg, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:47:10 compute-0 nova_compute[257700]: 2025-11-24 09:47:10.589 257704 WARNING nova.compute.resource_tracker [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] No compute node record for compute-0.ctlplane.example.com:a50ce3b5-7e9e-4263-a4aa-c35573ac7257: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host a50ce3b5-7e9e-4263-a4aa-c35573ac7257 could not be found.
Nov 24 09:47:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7192f052a6c366bd0b76a578e24d802e21425ae5f1598d7ad901f976101e7922-merged.mount: Deactivated successfully.
Nov 24 09:47:10 compute-0 nova_compute[257700]: 2025-11-24 09:47:10.610 257704 INFO nova.compute.resource_tracker [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: a50ce3b5-7e9e-4263-a4aa-c35573ac7257
Nov 24 09:47:10 compute-0 podman[258195]: 2025-11-24 09:47:10.622454888 +0000 UTC m=+0.221221507 container remove b34a0f48706a260fd570c60e2429940689bca04d521781b9bde05a03a7f7a6c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_borg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:47:10 compute-0 systemd[1]: libpod-conmon-b34a0f48706a260fd570c60e2429940689bca04d521781b9bde05a03a7f7a6c3.scope: Deactivated successfully.
Nov 24 09:47:10 compute-0 nova_compute[257700]: 2025-11-24 09:47:10.667 257704 DEBUG nova.compute.resource_tracker [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:47:10 compute-0 nova_compute[257700]: 2025-11-24 09:47:10.667 257704 DEBUG nova.compute.resource_tracker [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:47:10 compute-0 podman[258237]: 2025-11-24 09:47:10.799609121 +0000 UTC m=+0.053156837 container create f72f9a1f4035b5eb4bb9e8914dffbb19ee07a3bf924b5d17eace55331e92d3a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_curran, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:47:10 compute-0 systemd[1]: Started libpod-conmon-f72f9a1f4035b5eb4bb9e8914dffbb19ee07a3bf924b5d17eace55331e92d3a3.scope.
Nov 24 09:47:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:10 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:10 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:47:10 compute-0 podman[258237]: 2025-11-24 09:47:10.781761673 +0000 UTC m=+0.035309409 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b2b57a8d55ee27938ef3e78fe9a0dadc4977cdb0d7a176ffd18a5079801a8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b2b57a8d55ee27938ef3e78fe9a0dadc4977cdb0d7a176ffd18a5079801a8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b2b57a8d55ee27938ef3e78fe9a0dadc4977cdb0d7a176ffd18a5079801a8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b2b57a8d55ee27938ef3e78fe9a0dadc4977cdb0d7a176ffd18a5079801a8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b2b57a8d55ee27938ef3e78fe9a0dadc4977cdb0d7a176ffd18a5079801a8e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:10 compute-0 podman[258237]: 2025-11-24 09:47:10.900618273 +0000 UTC m=+0.154166009 container init f72f9a1f4035b5eb4bb9e8914dffbb19ee07a3bf924b5d17eace55331e92d3a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_curran, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:47:10 compute-0 podman[258237]: 2025-11-24 09:47:10.913617443 +0000 UTC m=+0.167165159 container start f72f9a1f4035b5eb4bb9e8914dffbb19ee07a3bf924b5d17eace55331e92d3a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:47:10 compute-0 podman[258237]: 2025-11-24 09:47:10.916856653 +0000 UTC m=+0.170404369 container attach f72f9a1f4035b5eb4bb9e8914dffbb19ee07a3bf924b5d17eace55331e92d3a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_curran, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:47:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:47:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:47:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:47:10] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:47:11 compute-0 ceph-mon[74331]: pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:47:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1422678948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4043361646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:11.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.163 257704 INFO nova.scheduler.client.report [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [req-413eb372-d956-4aae-94c5-b9ca22721c4f] Created resource provider record via placement API for resource provider with UUID a50ce3b5-7e9e-4263-a4aa-c35573ac7257 and name compute-0.ctlplane.example.com.
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.185 257704 DEBUG oslo_concurrency.processutils [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:47:11 compute-0 ecstatic_curran[258254]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:47:11 compute-0 ecstatic_curran[258254]: --> All data devices are unavailable
Nov 24 09:47:11 compute-0 systemd[1]: libpod-f72f9a1f4035b5eb4bb9e8914dffbb19ee07a3bf924b5d17eace55331e92d3a3.scope: Deactivated successfully.
Nov 24 09:47:11 compute-0 podman[258237]: 2025-11-24 09:47:11.285796219 +0000 UTC m=+0.539343945 container died f72f9a1f4035b5eb4bb9e8914dffbb19ee07a3bf924b5d17eace55331e92d3a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_curran, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 24 09:47:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-49b2b57a8d55ee27938ef3e78fe9a0dadc4977cdb0d7a176ffd18a5079801a8e-merged.mount: Deactivated successfully.
Nov 24 09:47:11 compute-0 podman[258237]: 2025-11-24 09:47:11.325198957 +0000 UTC m=+0.578746673 container remove f72f9a1f4035b5eb4bb9e8914dffbb19ee07a3bf924b5d17eace55331e92d3a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_curran, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 24 09:47:11 compute-0 systemd[1]: libpod-conmon-f72f9a1f4035b5eb4bb9e8914dffbb19ee07a3bf924b5d17eace55331e92d3a3.scope: Deactivated successfully.
Nov 24 09:47:11 compute-0 sudo[258109]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:11 compute-0 sudo[258301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:47:11 compute-0 sudo[258301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:11 compute-0 sudo[258301]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:11 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c002df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:11 compute-0 sudo[258326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:47:11 compute-0 sudo[258326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:47:11 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1599294403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.630 257704 DEBUG oslo_concurrency.processutils [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.637 257704 DEBUG nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 24 09:47:11 compute-0 nova_compute[257700]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.637 257704 INFO nova.virt.libvirt.host [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] kernel doesn't support AMD SEV
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.638 257704 DEBUG nova.compute.provider_tree [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Updating inventory in ProviderTree for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.639 257704 DEBUG nova.virt.libvirt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.691 257704 DEBUG nova.scheduler.client.report [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Updated inventory for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.692 257704 DEBUG nova.compute.provider_tree [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Updating resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.692 257704 DEBUG nova.compute.provider_tree [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Updating inventory in ProviderTree for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 09:47:11 compute-0 podman[258394]: 2025-11-24 09:47:11.821168765 +0000 UTC m=+0.036893088 container create c2ad8fb65dc23d183c5db9110b7f56097a8d24b5c6babe7d74414bf5b6abeeb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.832 257704 DEBUG nova.compute.provider_tree [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Updating resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.851 257704 DEBUG nova.compute.resource_tracker [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.851 257704 DEBUG oslo_concurrency.lockutils [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.851 257704 DEBUG nova.service [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 24 09:47:11 compute-0 systemd[1]: Started libpod-conmon-c2ad8fb65dc23d183c5db9110b7f56097a8d24b5c6babe7d74414bf5b6abeeb8.scope.
Nov 24 09:47:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.890 257704 DEBUG nova.service [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 24 09:47:11 compute-0 nova_compute[257700]: 2025-11-24 09:47:11.891 257704 DEBUG nova.servicegroup.drivers.db [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 24 09:47:11 compute-0 podman[258394]: 2025-11-24 09:47:11.893382799 +0000 UTC m=+0.109107122 container init c2ad8fb65dc23d183c5db9110b7f56097a8d24b5c6babe7d74414bf5b6abeeb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 09:47:11 compute-0 podman[258394]: 2025-11-24 09:47:11.899296575 +0000 UTC m=+0.115020898 container start c2ad8fb65dc23d183c5db9110b7f56097a8d24b5c6babe7d74414bf5b6abeeb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:47:11 compute-0 podman[258394]: 2025-11-24 09:47:11.805289644 +0000 UTC m=+0.021013987 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:47:11 compute-0 podman[258394]: 2025-11-24 09:47:11.902656148 +0000 UTC m=+0.118380471 container attach c2ad8fb65dc23d183c5db9110b7f56097a8d24b5c6babe7d74414bf5b6abeeb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 24 09:47:11 compute-0 hungry_babbage[258410]: 167 167
Nov 24 09:47:11 compute-0 systemd[1]: libpod-c2ad8fb65dc23d183c5db9110b7f56097a8d24b5c6babe7d74414bf5b6abeeb8.scope: Deactivated successfully.
Nov 24 09:47:11 compute-0 conmon[258410]: conmon c2ad8fb65dc23d183c5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2ad8fb65dc23d183c5db9110b7f56097a8d24b5c6babe7d74414bf5b6abeeb8.scope/container/memory.events
Nov 24 09:47:11 compute-0 podman[258394]: 2025-11-24 09:47:11.905086978 +0000 UTC m=+0.120811301 container died c2ad8fb65dc23d183c5db9110b7f56097a8d24b5c6babe7d74414bf5b6abeeb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:47:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-570b02cd96b910c795f5cc0009ec1404945ab98ad08e5ef3246ead0a63d3b305-merged.mount: Deactivated successfully.
Nov 24 09:47:11 compute-0 podman[258394]: 2025-11-24 09:47:11.933999007 +0000 UTC m=+0.149723330 container remove c2ad8fb65dc23d183c5db9110b7f56097a8d24b5c6babe7d74414bf5b6abeeb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:47:11 compute-0 systemd[1]: libpod-conmon-c2ad8fb65dc23d183c5db9110b7f56097a8d24b5c6babe7d74414bf5b6abeeb8.scope: Deactivated successfully.
Nov 24 09:47:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:12 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2145502785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1599294403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1945923946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:12 compute-0 podman[258433]: 2025-11-24 09:47:12.079415611 +0000 UTC m=+0.036673202 container create 63af510259f2cbf7a6e7d3c1fe4431c8d77eb7781c8de3d12a10a32525b1f635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:47:12 compute-0 systemd[1]: Started libpod-conmon-63af510259f2cbf7a6e7d3c1fe4431c8d77eb7781c8de3d12a10a32525b1f635.scope.
Nov 24 09:47:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc576dbe38aa31da420064a06d462d63618394f1bbe1a605bd108cd7e175b53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc576dbe38aa31da420064a06d462d63618394f1bbe1a605bd108cd7e175b53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc576dbe38aa31da420064a06d462d63618394f1bbe1a605bd108cd7e175b53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc576dbe38aa31da420064a06d462d63618394f1bbe1a605bd108cd7e175b53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:47:12 compute-0 podman[258433]: 2025-11-24 09:47:12.15506588 +0000 UTC m=+0.112323491 container init 63af510259f2cbf7a6e7d3c1fe4431c8d77eb7781c8de3d12a10a32525b1f635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_shirley, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:47:12 compute-0 podman[258433]: 2025-11-24 09:47:12.064916735 +0000 UTC m=+0.022174346 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:47:12 compute-0 podman[258433]: 2025-11-24 09:47:12.161902018 +0000 UTC m=+0.119159609 container start 63af510259f2cbf7a6e7d3c1fe4431c8d77eb7781c8de3d12a10a32525b1f635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_shirley, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:47:12 compute-0 podman[258433]: 2025-11-24 09:47:12.166654714 +0000 UTC m=+0.123912325 container attach 63af510259f2cbf7a6e7d3c1fe4431c8d77eb7781c8de3d12a10a32525b1f635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_shirley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:47:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:12.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:12 compute-0 kind_shirley[258449]: {
Nov 24 09:47:12 compute-0 kind_shirley[258449]:     "0": [
Nov 24 09:47:12 compute-0 kind_shirley[258449]:         {
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             "devices": [
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "/dev/loop3"
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             ],
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             "lv_name": "ceph_lv0",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             "lv_size": "21470642176",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             "name": "ceph_lv0",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             "tags": {
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.cluster_name": "ceph",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.crush_device_class": "",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.encrypted": "0",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.osd_id": "0",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.type": "block",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.vdo": "0",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:                 "ceph.with_tpm": "0"
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             },
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             "type": "block",
Nov 24 09:47:12 compute-0 kind_shirley[258449]:             "vg_name": "ceph_vg0"
Nov 24 09:47:12 compute-0 kind_shirley[258449]:         }
Nov 24 09:47:12 compute-0 kind_shirley[258449]:     ]
Nov 24 09:47:12 compute-0 kind_shirley[258449]: }
Nov 24 09:47:12 compute-0 systemd[1]: libpod-63af510259f2cbf7a6e7d3c1fe4431c8d77eb7781c8de3d12a10a32525b1f635.scope: Deactivated successfully.
Nov 24 09:47:12 compute-0 podman[258433]: 2025-11-24 09:47:12.417977361 +0000 UTC m=+0.375234952 container died 63af510259f2cbf7a6e7d3c1fe4431c8d77eb7781c8de3d12a10a32525b1f635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_shirley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:47:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-edc576dbe38aa31da420064a06d462d63618394f1bbe1a605bd108cd7e175b53-merged.mount: Deactivated successfully.
Nov 24 09:47:12 compute-0 podman[258433]: 2025-11-24 09:47:12.459980943 +0000 UTC m=+0.417238554 container remove 63af510259f2cbf7a6e7d3c1fe4431c8d77eb7781c8de3d12a10a32525b1f635 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:47:12 compute-0 systemd[1]: libpod-conmon-63af510259f2cbf7a6e7d3c1fe4431c8d77eb7781c8de3d12a10a32525b1f635.scope: Deactivated successfully.
Nov 24 09:47:12 compute-0 sudo[258326]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:12 compute-0 sudo[258469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:47:12 compute-0 sudo[258469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:12 compute-0 sudo[258469]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:12 compute-0 sudo[258494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:47:12 compute-0 sudo[258494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:12 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:12 compute-0 podman[258558]: 2025-11-24 09:47:12.98686041 +0000 UTC m=+0.044409731 container create 6f8fa85d2aa40246199f7d8047771340bb9332131e2f70033330cd403b773019 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 24 09:47:13 compute-0 systemd[1]: Started libpod-conmon-6f8fa85d2aa40246199f7d8047771340bb9332131e2f70033330cd403b773019.scope.
Nov 24 09:47:13 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2021533179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:47:13 compute-0 ceph-mon[74331]: pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:47:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:47:13 compute-0 podman[258558]: 2025-11-24 09:47:12.970265852 +0000 UTC m=+0.027815193 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:47:13 compute-0 podman[258558]: 2025-11-24 09:47:13.076614926 +0000 UTC m=+0.134164307 container init 6f8fa85d2aa40246199f7d8047771340bb9332131e2f70033330cd403b773019 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:47:13 compute-0 podman[258558]: 2025-11-24 09:47:13.083085465 +0000 UTC m=+0.140634786 container start 6f8fa85d2aa40246199f7d8047771340bb9332131e2f70033330cd403b773019 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_faraday, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:47:13 compute-0 podman[258558]: 2025-11-24 09:47:13.086091679 +0000 UTC m=+0.143641020 container attach 6f8fa85d2aa40246199f7d8047771340bb9332131e2f70033330cd403b773019 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:47:13 compute-0 agitated_faraday[258574]: 167 167
Nov 24 09:47:13 compute-0 systemd[1]: libpod-6f8fa85d2aa40246199f7d8047771340bb9332131e2f70033330cd403b773019.scope: Deactivated successfully.
Nov 24 09:47:13 compute-0 podman[258558]: 2025-11-24 09:47:13.088957649 +0000 UTC m=+0.146506980 container died 6f8fa85d2aa40246199f7d8047771340bb9332131e2f70033330cd403b773019 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 09:47:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-390ecd1a186aebe6646d6ce14345ab8fe72630916fcadee46d935095e04306c9-merged.mount: Deactivated successfully.
Nov 24 09:47:13 compute-0 podman[258558]: 2025-11-24 09:47:13.121969421 +0000 UTC m=+0.179518742 container remove 6f8fa85d2aa40246199f7d8047771340bb9332131e2f70033330cd403b773019 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:47:13 compute-0 systemd[1]: libpod-conmon-6f8fa85d2aa40246199f7d8047771340bb9332131e2f70033330cd403b773019.scope: Deactivated successfully.
Nov 24 09:47:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:13.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:13 compute-0 podman[258599]: 2025-11-24 09:47:13.286019992 +0000 UTC m=+0.041923381 container create 722654eaaa8b354cc785701f704038a387ab6b0f7296c17b34426413c04960bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Nov 24 09:47:13 compute-0 systemd[1]: Started libpod-conmon-722654eaaa8b354cc785701f704038a387ab6b0f7296c17b34426413c04960bf.scope.
Nov 24 09:47:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1599cc5e992023621acf375e973e86bec95898a0d37285e9f112af24788c2690/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1599cc5e992023621acf375e973e86bec95898a0d37285e9f112af24788c2690/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1599cc5e992023621acf375e973e86bec95898a0d37285e9f112af24788c2690/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1599cc5e992023621acf375e973e86bec95898a0d37285e9f112af24788c2690/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:47:13 compute-0 podman[258599]: 2025-11-24 09:47:13.353595573 +0000 UTC m=+0.109498962 container init 722654eaaa8b354cc785701f704038a387ab6b0f7296c17b34426413c04960bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:47:13 compute-0 podman[258599]: 2025-11-24 09:47:13.265814746 +0000 UTC m=+0.021718145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:47:13 compute-0 podman[258599]: 2025-11-24 09:47:13.361969688 +0000 UTC m=+0.117873057 container start 722654eaaa8b354cc785701f704038a387ab6b0f7296c17b34426413c04960bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 09:47:13 compute-0 podman[258599]: 2025-11-24 09:47:13.365496265 +0000 UTC m=+0.121399634 container attach 722654eaaa8b354cc785701f704038a387ab6b0f7296c17b34426413c04960bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_easley, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:47:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:13 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:13 compute-0 lvm[258689]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:47:13 compute-0 lvm[258689]: VG ceph_vg0 finished
Nov 24 09:47:14 compute-0 admiring_easley[258615]: {}
Nov 24 09:47:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:14 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c002df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:14 compute-0 systemd[1]: libpod-722654eaaa8b354cc785701f704038a387ab6b0f7296c17b34426413c04960bf.scope: Deactivated successfully.
Nov 24 09:47:14 compute-0 podman[258599]: 2025-11-24 09:47:14.041890357 +0000 UTC m=+0.797793726 container died 722654eaaa8b354cc785701f704038a387ab6b0f7296c17b34426413c04960bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_easley, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:47:14 compute-0 systemd[1]: libpod-722654eaaa8b354cc785701f704038a387ab6b0f7296c17b34426413c04960bf.scope: Consumed 1.058s CPU time.
Nov 24 09:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1599cc5e992023621acf375e973e86bec95898a0d37285e9f112af24788c2690-merged.mount: Deactivated successfully.
Nov 24 09:47:14 compute-0 podman[258599]: 2025-11-24 09:47:14.081724895 +0000 UTC m=+0.837628264 container remove 722654eaaa8b354cc785701f704038a387ab6b0f7296c17b34426413c04960bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_easley, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:47:14 compute-0 systemd[1]: libpod-conmon-722654eaaa8b354cc785701f704038a387ab6b0f7296c17b34426413c04960bf.scope: Deactivated successfully.
Nov 24 09:47:14 compute-0 sudo[258494]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:47:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:47:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:47:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:14 compute-0 sudo[258703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:47:14 compute-0 sudo[258703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:14 compute-0 sudo[258703]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:14.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:14 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:15 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:15 compute-0 ceph-mon[74331]: pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:47:15 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:47:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:15.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:47:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:47:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:15 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:47:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:47:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:47:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:47:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094715 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:47:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:16 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:47:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:47:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:16.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:16 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:16 compute-0 nova_compute[257700]: 2025-11-24 09:47:16.894 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:47:16 compute-0 nova_compute[257700]: 2025-11-24 09:47:16.941 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:47:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:47:17.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:47:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:47:17.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:47:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:47:17.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:47:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:17.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:17 compute-0 ceph-mon[74331]: pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:47:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:17 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:17 compute-0 podman[258732]: 2025-11-24 09:47:17.787115022 +0000 UTC m=+0.065770967 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 09:47:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:18 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c002df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:47:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:18.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:18 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:19.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:19 compute-0 ceph-mon[74331]: pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:47:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:19 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:19 compute-0 podman[258755]: 2025-11-24 09:47:19.834887364 +0000 UTC m=+0.075524867 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:47:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:20 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:47:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:20.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:47:20.557 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:47:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:47:20.557 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:47:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:47:20.557 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:47:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:20 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c002df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:47:20] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:47:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:47:20] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:47:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:21.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:21 compute-0 ceph-mon[74331]: pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:47:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:21 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:22 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:47:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:22.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:22 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:23.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:23 compute-0 ceph-mon[74331]: pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:47:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:23 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c002df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:23 compute-0 sudo[258786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:47:23 compute-0 sudo[258786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:23 compute-0 sudo[258786]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:24 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:47:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:24.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:24 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:25.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:25 compute-0 ceph-mon[74331]: pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:47:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:25 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:26 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c002df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:47:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:26.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:26 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:47:27.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:47:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:27.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:27 compute-0 ceph-mon[74331]: pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:47:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:27 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:27 compute-0 podman[258816]: 2025-11-24 09:47:27.768897905 +0000 UTC m=+0.045710845 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 24 09:47:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:28 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534001bc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:47:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:28.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:28 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:29.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:29 compute-0 ceph-mon[74331]: pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:47:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:29 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:30 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:30.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:30 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:47:30] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:47:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:47:30] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:47:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:31.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:31 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:31 compute-0 ceph-mon[74331]: pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:47:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:32 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:47:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:32.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:32 compute-0 ceph-osd[82549]: bluestore.MempoolThread fragmentation_score=0.000032 took=0.000042s
Nov 24 09:47:32 compute-0 ceph-mon[74331]: pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:47:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:32 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa508000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:33.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:33 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:34 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5340024e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:34.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:34 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:35.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:35 compute-0 ceph-mon[74331]: pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:35 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5080016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:36 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:36.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:36 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5340024e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:47:37.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:47:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:47:37.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:47:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:47:37.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:47:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:37.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:37 compute-0 ceph-mon[74331]: pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:37 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:38 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5080016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:47:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:38.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:38 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:47:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:39.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:47:39 compute-0 ceph-mon[74331]: pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:47:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:39 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5340024e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:40 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:40.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:40 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5080016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:47:40] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:47:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:47:40] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:47:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 09:47:41 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4101053094' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:47:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 09:47:41 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4101053094' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:47:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:41.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:41 compute-0 ceph-mon[74331]: pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:41 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/4101053094' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:47:41 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/4101053094' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:47:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:41 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:42 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534003810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:47:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:42.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:42 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1397509015' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:47:42 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1397509015' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:47:42 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/4177524823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:47:42 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/4177524823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:47:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:42 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:43.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:43 compute-0 ceph-mon[74331]: pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:47:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:43 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa508002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:44 compute-0 sudo[258854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:47:44 compute-0 sudo[258854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:47:44 compute-0 sudo[258854]: pam_unix(sudo:session): session closed for user root
Nov 24 09:47:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:44 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:44.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:44 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534003810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:45.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:47:45
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['images', '.mgr', 'backups', '.nfs', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'cephfs.cephfs.meta']
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:47:45 compute-0 ceph-mon[74331]: pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:47:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:45 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:47:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:47:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:46 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa508002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:46.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:47:46 compute-0 ceph-mon[74331]: pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:46 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:47:47.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:47:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:47.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:47 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:48 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:47:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:48.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:48 compute-0 podman[258884]: 2025-11-24 09:47:48.784295224 +0000 UTC m=+0.062001084 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 24 09:47:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:48 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:49.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:49 compute-0 ceph-mon[74331]: pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:47:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:49 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:50 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:50.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:50 compute-0 podman[258907]: 2025-11-24 09:47:50.817060988 +0000 UTC m=+0.096382430 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 09:47:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:50 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa508003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:47:50] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:47:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:47:50] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:47:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:51.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:51 compute-0 ceph-mon[74331]: pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:51 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:51 compute-0 radosgw[89481]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 24 09:47:51 compute-0 radosgw[89481]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 24 09:47:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:47:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:52.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:52 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:53.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:53 compute-0 ceph-mon[74331]: pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:47:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:53 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa508003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:54 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:54.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:54 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:55.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:55 compute-0 ceph-mon[74331]: pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:55 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:56 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa508003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:56.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:56 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:47:57.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:47:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:47:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:57.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:47:57 compute-0 ceph-mon[74331]: pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:47:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:57 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:58 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:47:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:47:58.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:47:58 compute-0 podman[258941]: 2025-11-24 09:47:58.777259871 +0000 UTC m=+0.055167426 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:47:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:58 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa508003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:47:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:47:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:47:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:47:59.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:47:59 compute-0 ceph-mon[74331]: pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:47:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:47:59 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:00 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c0042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:48:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:00.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:00 compute-0 ceph-mon[74331]: pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:48:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:00 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:48:00] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:48:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:48:00] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:48:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:01.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:48:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:01 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa508003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:02 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Nov 24 09:48:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:02.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:02 compute-0 ceph-mon[74331]: pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Nov 24 09:48:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:02 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:48:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:03.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:48:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:03 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:04 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:04 compute-0 sudo[258967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:48:04 compute-0 sudo[258967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:04 compute-0 sudo[258967]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:48:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:04.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:04 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:05.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:05 compute-0 ceph-mon[74331]: pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:48:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:05 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5280008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:06 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5280008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:48:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:06.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:06 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:48:07.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:48:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:48:07.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:48:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:48:07.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
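
[Annotation] Both dashboard webhook receivers that Alertmanager tries to notify (compute-1 and compute-2 on port 8443) time out, so the notification is dropped after two attempts. A quick reachability probe, under stated assumptions: the URLs are taken verbatim from the errors above, while the empty {"alerts": []} payload is only a guess at a minimal Alertmanager webhook body for testing, not a documented contract:

    import json
    import urllib.request

    def probe(url: str, timeout: float = 3.0) -> str:
        # POST a minimal JSON body, the way Alertmanager's webhook
        # integration would, and report what comes back.
        body = json.dumps({"alerts": []}).encode()
        req = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return f"{url} -> HTTP {resp.status}"
        except Exception as exc:  # ConnectionRefusedError, timeout, ...
            return f"{url} -> {exc}"

    for host in ("compute-1", "compute-2"):
        print(probe(f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"))
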
Nov 24 09:48:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:07.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:07 compute-0 ceph-mon[74331]: pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:48:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:07 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:08 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5280008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:48:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:08.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:08 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520001670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.923 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.923 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.924 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.924 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.946 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.946 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.947 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.947 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.947 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.948 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.948 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.948 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.948 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.977 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.977 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.977 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.977 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:48:08 compute-0 nova_compute[257700]: 2025-11-24 09:48:08.978 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:48:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:09.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:09 compute-0 ceph-mon[74331]: pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Nov 24 09:48:09 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3564109661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:48:09 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2959557878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:09 compute-0 nova_compute[257700]: 2025-11-24 09:48:09.446 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
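
[Annotation] The resource tracker audits Ceph capacity by shelling out to exactly the command logged above. A rough stand-alone equivalent, assuming the common ceph df JSON layout (a top-level "stats" object carrying total_bytes and total_avail_bytes; verify the key names against your Ceph release):

    import json
    import subprocess

    def ceph_capacity_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same invocation the resource tracker logs above.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        stats = json.loads(out)["stats"]
        gib = 1024 ** 3
        return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

    total, avail = ceph_capacity_gib()
    # The pgmap lines in this log suggest 60.0 / 60.0 on this cluster.
    print(f"total={total:.1f} GiB avail={avail:.1f} GiB")
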
Nov 24 09:48:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:09 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:09 compute-0 nova_compute[257700]: 2025-11-24 09:48:09.606 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:48:09 compute-0 nova_compute[257700]: 2025-11-24 09:48:09.607 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4972MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:48:09 compute-0 nova_compute[257700]: 2025-11-24 09:48:09.607 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:48:09 compute-0 nova_compute[257700]: 2025-11-24 09:48:09.608 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:48:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094809 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:48:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:10 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:10 compute-0 nova_compute[257700]: 2025-11-24 09:48:10.198 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:48:10 compute-0 nova_compute[257700]: 2025-11-24 09:48:10.198 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:48:10 compute-0 nova_compute[257700]: 2025-11-24 09:48:10.234 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:48:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:10.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:10 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2959557878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:10 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/308233780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:48:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179360064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:10 compute-0 nova_compute[257700]: 2025-11-24 09:48:10.669 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:48:10 compute-0 nova_compute[257700]: 2025-11-24 09:48:10.676 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:48:10 compute-0 nova_compute[257700]: 2025-11-24 09:48:10.693 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:48:10 compute-0 nova_compute[257700]: 2025-11-24 09:48:10.694 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:48:10 compute-0 nova_compute[257700]: 2025-11-24 09:48:10.694 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
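
[Annotation] The inventory nova reports to placement above fixes the schedulable capacity directly: placement's usual model is (total - reserved) * allocation_ratio per resource class. Redoing that arithmetic with the logged numbers:

    # Capacity implied by the inventory data in the log line above.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")  # MEMORY_MB: 7168, VCPU: 32, DISK_GB: 53.1
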
Nov 24 09:48:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:10 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520001670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:48:10] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:48:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:48:10] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:48:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:11.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:11 compute-0 ceph-mon[74331]: pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4255591343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/179360064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/579533964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:48:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:11 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5280008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:12 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:48:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:12.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:12 compute-0 ceph-mon[74331]: pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:48:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:12 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:13.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:13 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520001670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:14 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5280008d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:14.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:14 compute-0 sudo[259047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:48:14 compute-0 sudo[259047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:14 compute-0 sudo[259047]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:14 compute-0 sudo[259072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:48:14 compute-0 sudo[259072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:14 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:15 compute-0 sudo[259072]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:15 compute-0 ceph-mon[74331]: pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:48:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:15.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:48:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:48:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:48:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:48:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:48:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:48:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:48:15 compute-0 sudo[259128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:48:15 compute-0 sudo[259128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:15 compute-0 sudo[259128]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:48:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:48:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:48:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:48:15 compute-0 sudo[259153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:48:15 compute-0 sudo[259153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:15 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:15 compute-0 podman[259219]: 2025-11-24 09:48:15.889030239 +0000 UTC m=+0.064103769 container create c2bd91a0626b3b74d5fa8bead794465e3c7917e7059f2936f81efd8644b1cf12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 09:48:15 compute-0 podman[259219]: 2025-11-24 09:48:15.845775611 +0000 UTC m=+0.020849161 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:48:15 compute-0 systemd[1]: Started libpod-conmon-c2bd91a0626b3b74d5fa8bead794465e3c7917e7059f2936f81efd8644b1cf12.scope.
Nov 24 09:48:15 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:48:16 compute-0 podman[259219]: 2025-11-24 09:48:16.006340318 +0000 UTC m=+0.181413848 container init c2bd91a0626b3b74d5fa8bead794465e3c7917e7059f2936f81efd8644b1cf12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chatelet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:48:16 compute-0 podman[259219]: 2025-11-24 09:48:16.013696667 +0000 UTC m=+0.188770197 container start c2bd91a0626b3b74d5fa8bead794465e3c7917e7059f2936f81efd8644b1cf12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chatelet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:48:16 compute-0 focused_chatelet[259237]: 167 167
Nov 24 09:48:16 compute-0 systemd[1]: libpod-c2bd91a0626b3b74d5fa8bead794465e3c7917e7059f2936f81efd8644b1cf12.scope: Deactivated successfully.
Nov 24 09:48:16 compute-0 podman[259219]: 2025-11-24 09:48:16.025623759 +0000 UTC m=+0.200697309 container attach c2bd91a0626b3b74d5fa8bead794465e3c7917e7059f2936f81efd8644b1cf12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chatelet, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:48:16 compute-0 podman[259219]: 2025-11-24 09:48:16.026353747 +0000 UTC m=+0.201427277 container died c2bd91a0626b3b74d5fa8bead794465e3c7917e7059f2936f81efd8644b1cf12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:48:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:16 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa520001670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a4913c0c455e99b31eb9e2587ef9ec0c78b33e6bec7dd6f5927afb77710e34a-merged.mount: Deactivated successfully.
Nov 24 09:48:16 compute-0 podman[259219]: 2025-11-24 09:48:16.149485138 +0000 UTC m=+0.324558668 container remove c2bd91a0626b3b74d5fa8bead794465e3c7917e7059f2936f81efd8644b1cf12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chatelet, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 09:48:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:16 compute-0 systemd[1]: libpod-conmon-c2bd91a0626b3b74d5fa8bead794465e3c7917e7059f2936f81efd8644b1cf12.scope: Deactivated successfully.
Nov 24 09:48:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:16.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:16 compute-0 podman[259263]: 2025-11-24 09:48:16.303881374 +0000 UTC m=+0.022668645 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:48:16 compute-0 podman[259263]: 2025-11-24 09:48:16.497398676 +0000 UTC m=+0.216185937 container create 7544849f0fc44b569f75090e514af0918254dd1071b7764129bf675335d0670a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:48:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:48:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:48:16 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:48:16 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:48:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:48:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:48:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:48:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:48:16 compute-0 systemd[1]: Started libpod-conmon-7544849f0fc44b569f75090e514af0918254dd1071b7764129bf675335d0670a.scope.
Nov 24 09:48:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c0953cd25346448eee62dd467c69c6573a92fb1a932404e3f3321e63465cea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c0953cd25346448eee62dd467c69c6573a92fb1a932404e3f3321e63465cea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c0953cd25346448eee62dd467c69c6573a92fb1a932404e3f3321e63465cea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c0953cd25346448eee62dd467c69c6573a92fb1a932404e3f3321e63465cea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c0953cd25346448eee62dd467c69c6573a92fb1a932404e3f3321e63465cea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:16 compute-0 podman[259263]: 2025-11-24 09:48:16.833527267 +0000 UTC m=+0.552314598 container init 7544849f0fc44b569f75090e514af0918254dd1071b7764129bf675335d0670a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_boyd, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:48:16 compute-0 podman[259263]: 2025-11-24 09:48:16.841079372 +0000 UTC m=+0.559866633 container start 7544849f0fc44b569f75090e514af0918254dd1071b7764129bf675335d0670a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_boyd, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 09:48:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:16 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528003200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:16 compute-0 podman[259263]: 2025-11-24 09:48:16.894405006 +0000 UTC m=+0.613192297 container attach 7544849f0fc44b569f75090e514af0918254dd1071b7764129bf675335d0670a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:48:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:48:17.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:48:17 compute-0 naughty_boyd[259279]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:48:17 compute-0 naughty_boyd[259279]: --> All data devices are unavailable
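
[Annotation] The ceph-volume batch run against /dev/ceph_vg0/ceph_lv0 reports every data device unavailable, which usually means the LV is already consumed (for example, it already backs an OSD); the orchestrator's follow-up "ceph-volume ... lvm list" a few lines below is the standard way to confirm that. A hedged sketch of the same check, assuming the common lvm list JSON shape (a dict keyed by OSD id whose values are lists of device entries with fields like lv_path and type):

    import json
    import subprocess

    def lvm_inventory():
        # Same query cephadm issues below; requires a Ceph host environment.
        out = subprocess.check_output(
            ["ceph-volume", "lvm", "list", "--format", "json"])
        return json.loads(out)

    for osd_id, devices in lvm_inventory().items():
        for dev in devices:
            print(osd_id, dev.get("lv_path"), dev.get("type"))
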
Nov 24 09:48:17 compute-0 systemd[1]: libpod-7544849f0fc44b569f75090e514af0918254dd1071b7764129bf675335d0670a.scope: Deactivated successfully.
Nov 24 09:48:17 compute-0 podman[259263]: 2025-11-24 09:48:17.146619454 +0000 UTC m=+0.865406715 container died 7544849f0fc44b569f75090e514af0918254dd1071b7764129bf675335d0670a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_boyd, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:48:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:17.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-73c0953cd25346448eee62dd467c69c6573a92fb1a932404e3f3321e63465cea-merged.mount: Deactivated successfully.
Nov 24 09:48:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:17 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:17 compute-0 ceph-mon[74331]: pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:17 compute-0 podman[259263]: 2025-11-24 09:48:17.809023234 +0000 UTC m=+1.527810495 container remove 7544849f0fc44b569f75090e514af0918254dd1071b7764129bf675335d0670a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Nov 24 09:48:17 compute-0 sudo[259153]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:17 compute-0 sudo[259307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:48:17 compute-0 sudo[259307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:17 compute-0 sudo[259307]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:17 compute-0 systemd[1]: libpod-conmon-7544849f0fc44b569f75090e514af0918254dd1071b7764129bf675335d0670a.scope: Deactivated successfully.
Nov 24 09:48:17 compute-0 sudo[259332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:48:17 compute-0 sudo[259332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:18 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:18.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:18 compute-0 podman[259401]: 2025-11-24 09:48:18.311994514 +0000 UTC m=+0.019541329 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:48:18 compute-0 podman[259401]: 2025-11-24 09:48:18.417925845 +0000 UTC m=+0.125472670 container create fddf20a0c69bd99371769a79947d0c5b35058d1e80a200475da4aadab5f30da8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_goldwasser, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 09:48:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:18 compute-0 systemd[1]: Started libpod-conmon-fddf20a0c69bd99371769a79947d0c5b35058d1e80a200475da4aadab5f30da8.scope.
Nov 24 09:48:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:48:18 compute-0 podman[259401]: 2025-11-24 09:48:18.766309065 +0000 UTC m=+0.473855880 container init fddf20a0c69bd99371769a79947d0c5b35058d1e80a200475da4aadab5f30da8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_goldwasser, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:48:18 compute-0 podman[259401]: 2025-11-24 09:48:18.774786662 +0000 UTC m=+0.482333457 container start fddf20a0c69bd99371769a79947d0c5b35058d1e80a200475da4aadab5f30da8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_goldwasser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:48:18 compute-0 sharp_goldwasser[259418]: 167 167
Nov 24 09:48:18 compute-0 systemd[1]: libpod-fddf20a0c69bd99371769a79947d0c5b35058d1e80a200475da4aadab5f30da8.scope: Deactivated successfully.
Nov 24 09:48:18 compute-0 conmon[259418]: conmon fddf20a0c69bd9937176 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fddf20a0c69bd99371769a79947d0c5b35058d1e80a200475da4aadab5f30da8.scope/container/memory.events
Nov 24 09:48:18 compute-0 podman[259401]: 2025-11-24 09:48:18.870076173 +0000 UTC m=+0.577622968 container attach fddf20a0c69bd99371769a79947d0c5b35058d1e80a200475da4aadab5f30da8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_goldwasser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:48:18 compute-0 podman[259401]: 2025-11-24 09:48:18.87119911 +0000 UTC m=+0.578745915 container died fddf20a0c69bd99371769a79947d0c5b35058d1e80a200475da4aadab5f30da8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_goldwasser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 24 09:48:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:18 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5200038f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:18 : epoch 6924297d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:48:19 compute-0 ceph-mon[74331]: pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc45c62b8626a52000ae87ed502b22b899fbd07c9861f93499b81648364a2b3d-merged.mount: Deactivated successfully.
Nov 24 09:48:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:19.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:19 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528003200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:20 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:20.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:20 compute-0 podman[259401]: 2025-11-24 09:48:20.533734308 +0000 UTC m=+2.241281103 container remove fddf20a0c69bd99371769a79947d0c5b35058d1e80a200475da4aadab5f30da8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_goldwasser, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 09:48:20 compute-0 systemd[1]: libpod-conmon-fddf20a0c69bd99371769a79947d0c5b35058d1e80a200475da4aadab5f30da8.scope: Deactivated successfully.
Nov 24 09:48:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:48:20.557 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:48:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:48:20.558 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:48:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:48:20.558 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:48:20 compute-0 podman[259437]: 2025-11-24 09:48:20.592869794 +0000 UTC m=+1.511513215 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 09:48:20 compute-0 ceph-mon[74331]: pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:20 compute-0 podman[259465]: 2025-11-24 09:48:20.688213797 +0000 UTC m=+0.024631464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:48:20 compute-0 podman[259465]: 2025-11-24 09:48:20.813984812 +0000 UTC m=+0.150402449 container create 814a7034473689acf81640c47f58352330070b7aa110de875ac45c078cd10d9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_montalcini, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:48:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:20 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:48:20] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:48:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:48:20] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:48:21 compute-0 systemd[1]: Started libpod-conmon-814a7034473689acf81640c47f58352330070b7aa110de875ac45c078cd10d9c.scope.
Nov 24 09:48:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5a20ec3edda09291358b983ce6ddff0845d8947c90def6c1fc38fe7a37c871/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5a20ec3edda09291358b983ce6ddff0845d8947c90def6c1fc38fe7a37c871/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5a20ec3edda09291358b983ce6ddff0845d8947c90def6c1fc38fe7a37c871/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5a20ec3edda09291358b983ce6ddff0845d8947c90def6c1fc38fe7a37c871/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:21.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:21 compute-0 podman[259465]: 2025-11-24 09:48:21.298660556 +0000 UTC m=+0.635078193 container init 814a7034473689acf81640c47f58352330070b7aa110de875ac45c078cd10d9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_montalcini, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 24 09:48:21 compute-0 podman[259465]: 2025-11-24 09:48:21.306485786 +0000 UTC m=+0.642903463 container start 814a7034473689acf81640c47f58352330070b7aa110de875ac45c078cd10d9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 09:48:21 compute-0 podman[259465]: 2025-11-24 09:48:21.407938508 +0000 UTC m=+0.744356155 container attach 814a7034473689acf81640c47f58352330070b7aa110de875ac45c078cd10d9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_montalcini, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:48:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:21 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5200038f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:21 compute-0 podman[259483]: 2025-11-24 09:48:21.547341167 +0000 UTC m=+0.420092925 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 09:48:21 compute-0 tender_montalcini[259482]: {
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:     "0": [
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:         {
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             "devices": [
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "/dev/loop3"
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             ],
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             "lv_name": "ceph_lv0",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             "lv_size": "21470642176",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             "name": "ceph_lv0",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             "tags": {
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.cluster_name": "ceph",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.crush_device_class": "",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.encrypted": "0",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.osd_id": "0",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.type": "block",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.vdo": "0",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:                 "ceph.with_tpm": "0"
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             },
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             "type": "block",
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:             "vg_name": "ceph_vg0"
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:         }
Nov 24 09:48:21 compute-0 tender_montalcini[259482]:     ]
Nov 24 09:48:21 compute-0 tender_montalcini[259482]: }
Nov 24 09:48:21 compute-0 systemd[1]: libpod-814a7034473689acf81640c47f58352330070b7aa110de875ac45c078cd10d9c.scope: Deactivated successfully.
Nov 24 09:48:21 compute-0 podman[259465]: 2025-11-24 09:48:21.589060397 +0000 UTC m=+0.925478034 container died 814a7034473689acf81640c47f58352330070b7aa110de875ac45c078cd10d9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_montalcini, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:48:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab5a20ec3edda09291358b983ce6ddff0845d8947c90def6c1fc38fe7a37c871-merged.mount: Deactivated successfully.
Nov 24 09:48:21 compute-0 podman[259465]: 2025-11-24 09:48:21.968791293 +0000 UTC m=+1.305208930 container remove 814a7034473689acf81640c47f58352330070b7aa110de875ac45c078cd10d9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_montalcini, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 09:48:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:21 : epoch 6924297d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:48:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:21 : epoch 6924297d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:48:22 compute-0 sudo[259332]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:22 compute-0 systemd[1]: libpod-conmon-814a7034473689acf81640c47f58352330070b7aa110de875ac45c078cd10d9c.scope: Deactivated successfully.
Nov 24 09:48:22 compute-0 sudo[259532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:48:22 compute-0 sudo[259532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:22 compute-0 sudo[259532]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:22 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:22 compute-0 sudo[259557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:48:22 compute-0 sudo[259557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:48:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:22.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:22 compute-0 podman[259622]: 2025-11-24 09:48:22.536150898 +0000 UTC m=+0.046376135 container create 89f7a2631072afa27154ee8c39e2f578827101a54b6a190c5b447b17c53a3705 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:48:22 compute-0 systemd[1]: Started libpod-conmon-89f7a2631072afa27154ee8c39e2f578827101a54b6a190c5b447b17c53a3705.scope.
Nov 24 09:48:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:48:22 compute-0 podman[259622]: 2025-11-24 09:48:22.510303617 +0000 UTC m=+0.020528874 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:48:22 compute-0 podman[259622]: 2025-11-24 09:48:22.632850943 +0000 UTC m=+0.143076210 container init 89f7a2631072afa27154ee8c39e2f578827101a54b6a190c5b447b17c53a3705 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_murdock, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 09:48:22 compute-0 podman[259622]: 2025-11-24 09:48:22.640160023 +0000 UTC m=+0.150385260 container start 89f7a2631072afa27154ee8c39e2f578827101a54b6a190c5b447b17c53a3705 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_murdock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 24 09:48:22 compute-0 awesome_murdock[259639]: 167 167
Nov 24 09:48:22 compute-0 systemd[1]: libpod-89f7a2631072afa27154ee8c39e2f578827101a54b6a190c5b447b17c53a3705.scope: Deactivated successfully.
Nov 24 09:48:22 compute-0 podman[259622]: 2025-11-24 09:48:22.654747139 +0000 UTC m=+0.164972396 container attach 89f7a2631072afa27154ee8c39e2f578827101a54b6a190c5b447b17c53a3705 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:48:22 compute-0 podman[259622]: 2025-11-24 09:48:22.655254121 +0000 UTC m=+0.165479368 container died 89f7a2631072afa27154ee8c39e2f578827101a54b6a190c5b447b17c53a3705 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_murdock, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 09:48:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d2f7345aee716a8487d5016600752d4adcd215b34cab0b961df25f0764c5b95-merged.mount: Deactivated successfully.
Nov 24 09:48:22 compute-0 podman[259622]: 2025-11-24 09:48:22.763076288 +0000 UTC m=+0.273301525 container remove 89f7a2631072afa27154ee8c39e2f578827101a54b6a190c5b447b17c53a3705 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:48:22 compute-0 systemd[1]: libpod-conmon-89f7a2631072afa27154ee8c39e2f578827101a54b6a190c5b447b17c53a3705.scope: Deactivated successfully.
Nov 24 09:48:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:22 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:22 compute-0 podman[259663]: 2025-11-24 09:48:22.917006382 +0000 UTC m=+0.047782369 container create 1ff410b9c86ce44e3b3c3ca2694af5da72a9038c2393445cf126aea61b51a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_saha, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 09:48:22 compute-0 systemd[1]: Started libpod-conmon-1ff410b9c86ce44e3b3c3ca2694af5da72a9038c2393445cf126aea61b51a937.scope.
Nov 24 09:48:22 compute-0 podman[259663]: 2025-11-24 09:48:22.890370741 +0000 UTC m=+0.021146748 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:48:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78778b6580833e5b0da2a7a834f9539116db0cc75ab7d64760dfbef2a84f26fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78778b6580833e5b0da2a7a834f9539116db0cc75ab7d64760dfbef2a84f26fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78778b6580833e5b0da2a7a834f9539116db0cc75ab7d64760dfbef2a84f26fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78778b6580833e5b0da2a7a834f9539116db0cc75ab7d64760dfbef2a84f26fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:23 compute-0 podman[259663]: 2025-11-24 09:48:23.043562018 +0000 UTC m=+0.174338035 container init 1ff410b9c86ce44e3b3c3ca2694af5da72a9038c2393445cf126aea61b51a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_saha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 24 09:48:23 compute-0 podman[259663]: 2025-11-24 09:48:23.050630691 +0000 UTC m=+0.181406688 container start 1ff410b9c86ce44e3b3c3ca2694af5da72a9038c2393445cf126aea61b51a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:48:23 compute-0 podman[259663]: 2025-11-24 09:48:23.058672228 +0000 UTC m=+0.189448225 container attach 1ff410b9c86ce44e3b3c3ca2694af5da72a9038c2393445cf126aea61b51a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_saha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:48:23 compute-0 ceph-mon[74331]: pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:48:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:23.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:23 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:23 compute-0 lvm[259755]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:48:23 compute-0 lvm[259755]: VG ceph_vg0 finished
Nov 24 09:48:23 compute-0 optimistic_saha[259680]: {}
Nov 24 09:48:23 compute-0 systemd[1]: libpod-1ff410b9c86ce44e3b3c3ca2694af5da72a9038c2393445cf126aea61b51a937.scope: Deactivated successfully.
Nov 24 09:48:23 compute-0 systemd[1]: libpod-1ff410b9c86ce44e3b3c3ca2694af5da72a9038c2393445cf126aea61b51a937.scope: Consumed 1.037s CPU time.
Nov 24 09:48:23 compute-0 podman[259758]: 2025-11-24 09:48:23.760628494 +0000 UTC m=+0.025036263 container died 1ff410b9c86ce44e3b3c3ca2694af5da72a9038c2393445cf126aea61b51a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_saha, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:48:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-78778b6580833e5b0da2a7a834f9539116db0cc75ab7d64760dfbef2a84f26fd-merged.mount: Deactivated successfully.
Nov 24 09:48:23 compute-0 podman[259758]: 2025-11-24 09:48:23.843159733 +0000 UTC m=+0.107567492 container remove 1ff410b9c86ce44e3b3c3ca2694af5da72a9038c2393445cf126aea61b51a937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:48:23 compute-0 systemd[1]: libpod-conmon-1ff410b9c86ce44e3b3c3ca2694af5da72a9038c2393445cf126aea61b51a937.scope: Deactivated successfully.
Nov 24 09:48:23 compute-0 sudo[259557]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:48:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:48:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:48:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:48:23 compute-0 sudo[259773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:48:24 compute-0 sudo[259773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:24 compute-0 sudo[259773]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:24 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5200038f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:48:24 compute-0 sudo[259798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:48:24 compute-0 sudo[259798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:24 compute-0 sudo[259798]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:24.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:24 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003e90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:24 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:48:24 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:48:24 compute-0 ceph-mon[74331]: pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:48:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:25 : epoch 6924297d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:48:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:25.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:25 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:26 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:48:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:26.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:26 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5200038f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:48:27.070Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:48:27 compute-0 ceph-mon[74331]: pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:48:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:27.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:27 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:28 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:48:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:28.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:28 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:29 compute-0 ceph-mon[74331]: pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:48:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:29.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:29 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5200038f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:29 compute-0 podman[259829]: 2025-11-24 09:48:29.799577403 +0000 UTC m=+0.073078348 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 09:48:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094829 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:48:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:30 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:48:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:30.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:30 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:48:30] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Nov 24 09:48:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:48:30] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Nov 24 09:48:31 compute-0 ceph-mon[74331]: pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:48:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:48:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:31.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:31 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:32 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa5200038f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:48:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:32.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:32 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:33.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:33 compute-0 ceph-mon[74331]: pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:48:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:33 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:34 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:48:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:34.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:34 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa510003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:35.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:35 compute-0 ceph-mon[74331]: pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:48:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:35 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa52c001760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:36 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa528003f10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:48:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:48:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:36.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:36 compute-0 ceph-mon[74331]: pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:48:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[255049]: 24/11/2025 09:48:36 : epoch 6924297d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa534004520 fd 39 proxy ignored for local
Nov 24 09:48:36 compute-0 kernel: ganesha.nfsd[258811]: segfault at 50 ip 00007fa5ea29e32e sp 00007fa5b4ff8210 error 4 in libntirpc.so.5.8[7fa5ea283000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 24 09:48:36 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:48:36 compute-0 systemd[1]: Started Process Core Dump (PID 259856/UID 0).
Nov 24 09:48:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:48:37.070Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:48:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:37.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:48:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:38.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:38 compute-0 systemd-coredump[259857]: Process 255053 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 55:
                                                    #0  0x00007fa5ea29e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:48:38 compute-0 systemd[1]: systemd-coredump@9-259856-0.service: Deactivated successfully.
Nov 24 09:48:38 compute-0 systemd[1]: systemd-coredump@9-259856-0.service: Consumed 1.201s CPU time.
Nov 24 09:48:38 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:48:38 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 09:48:38 compute-0 podman[259865]: 2025-11-24 09:48:38.786442135 +0000 UTC m=+0.023867766 container died 23a5a18a3f0edbf33c725c8301f9a3f79a26049224ffead44197162fdd659a4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:48:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-52e4efe15d3eab0f916d9eefcafbd4d304e88d2bbed7a4576062987b323e1697-merged.mount: Deactivated successfully.
Nov 24 09:48:38 compute-0 podman[259865]: 2025-11-24 09:48:38.844509094 +0000 UTC m=+0.081934715 container remove 23a5a18a3f0edbf33c725c8301f9a3f79a26049224ffead44197162fdd659a4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:48:38 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:48:38 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Failed with result 'exit-code'.
Nov 24 09:48:38 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.595s CPU time.
Nov 24 09:48:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:39.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:39 compute-0 ceph-mon[74331]: pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:48:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:48:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:40.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:40 compute-0 ceph-mon[74331]: pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:48:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:48:40] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Nov 24 09:48:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:48:40] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Nov 24 09:48:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:41.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:48:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:42.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094842 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:48:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:48:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:43.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:48:43 compute-0 ceph-mon[74331]: pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:48:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:44 compute-0 sudo[259915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:48:44 compute-0 sudo[259915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:48:44 compute-0 sudo[259915]: pam_unix(sudo:session): session closed for user root
Nov 24 09:48:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:45.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:48:45
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.control', '.nfs', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:48:45 compute-0 ceph-mon[74331]: pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:48:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:48:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:46.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:48:46 compute-0 ceph-mon[74331]: pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:48:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:48:47.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:48:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:47.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094847 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:48:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:48:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:48.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:49 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Scheduled restart job, restart counter is at 10.
Nov 24 09:48:49 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:48:49 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.595s CPU time.
Nov 24 09:48:49 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:48:49 compute-0 ceph-mon[74331]: pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:48:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:49.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:49 compute-0 podman[259994]: 2025-11-24 09:48:49.405552715 +0000 UTC m=+0.042110101 container create 00c13a7990cd7d517ad65204534333af4acd6989a91aff71b1c84a43c5349db8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 09:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f42a2e27276e1a66ee3acb84ad172394ad5277417af75a2ecb77fa22ec0f14/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f42a2e27276e1a66ee3acb84ad172394ad5277417af75a2ecb77fa22ec0f14/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f42a2e27276e1a66ee3acb84ad172394ad5277417af75a2ecb77fa22ec0f14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f42a2e27276e1a66ee3acb84ad172394ad5277417af75a2ecb77fa22ec0f14/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ssprex-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:48:49 compute-0 podman[259994]: 2025-11-24 09:48:49.460249602 +0000 UTC m=+0.096807018 container init 00c13a7990cd7d517ad65204534333af4acd6989a91aff71b1c84a43c5349db8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 24 09:48:49 compute-0 podman[259994]: 2025-11-24 09:48:49.465618384 +0000 UTC m=+0.102175770 container start 00c13a7990cd7d517ad65204534333af4acd6989a91aff71b1c84a43c5349db8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 24 09:48:49 compute-0 bash[259994]: 00c13a7990cd7d517ad65204534333af4acd6989a91aff71b1c84a43c5349db8
Nov 24 09:48:49 compute-0 podman[259994]: 2025-11-24 09:48:49.384012887 +0000 UTC m=+0.020570293 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:48:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:48:49 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:48:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:48:49 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:48:49 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:48:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:48:49 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:48:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:48:49 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:48:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:48:49 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:48:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:48:49 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:48:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:48:49 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:48:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:48:49 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:48:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:48:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:50.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:50 compute-0 podman[260052]: 2025-11-24 09:48:50.775408675 +0000 UTC m=+0.054653187 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 09:48:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:48:50] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:48:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:48:50] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Nov 24 09:48:51 compute-0 ceph-mon[74331]: pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:48:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:48:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:51.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:48:51 compute-0 podman[260073]: 2025-11-24 09:48:51.79581509 +0000 UTC m=+0.074294758 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:48:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:52.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:53 compute-0 ceph-mon[74331]: pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:53.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:54.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:55 compute-0 ceph-mon[74331]: pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:55.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:48:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:48:55 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:48:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:48:55 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:48:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:56.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:48:57.073Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:48:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:57.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:57 compute-0 ceph-mon[74331]: pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:48:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:48:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:48:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:48:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:48:58 compute-0 ceph-mon[74331]: pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:48:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:48:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:48:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:48:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:48:59.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:49:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:00.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:00 compute-0 podman[260109]: 2025-11-24 09:49:00.769067739 +0000 UTC m=+0.049479041 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:49:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:49:00] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:49:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:49:00] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:49:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 09:49:01 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3169303797' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:49:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 09:49:01 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3169303797' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:49:01 compute-0 ceph-mon[74331]: pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Nov 24 09:49:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:49:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/3169303797' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:49:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/3169303797' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:49:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:01.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 24 09:49:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:49:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:02 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 24 09:49:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:02.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:02 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:03 compute-0 ceph-mon[74331]: pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Nov 24 09:49:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:03.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:03 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e4c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:04 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 24 09:49:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:04.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:04 compute-0 sudo[260148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:49:04 compute-0 sudo[260148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:04 compute-0 sudo[260148]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:04 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:49:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:04 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:49:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094904 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:49:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:04 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:05 compute-0 ceph-mon[74331]: pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 24 09:49:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:05.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:05 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:06 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 24 09:49:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:06.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:06 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:49:07.073Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:49:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:07.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:07 compute-0 ceph-mon[74331]: pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Nov 24 09:49:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:07 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:07 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:49:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:08 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e4c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Nov 24 09:49:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:08.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:08 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:09.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:09 compute-0 ceph-mon[74331]: pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Nov 24 09:49:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:09 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094909 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:49:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:10 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:10.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.686 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.703 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.703 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.703 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.712 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.712 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.713 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.713 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.713 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.728 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.728 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.728 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.728 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:49:10 compute-0 nova_compute[257700]: 2025-11-24 09:49:10.729 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:49:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:10 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e4c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:49:10] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:49:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:49:10] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Nov 24 09:49:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:49:11 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4210752601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.178 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:49:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:11.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.377 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.379 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4951MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.379 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.379 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:49:11 compute-0 ceph-mon[74331]: pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1645154969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4210752601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.430 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.430 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.455 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:49:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:11 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:49:11 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3250163179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.960 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.965 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.979 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.981 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:49:11 compute-0 nova_compute[257700]: 2025-11-24 09:49:11.981 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:49:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:12 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:12 compute-0 nova_compute[257700]: 2025-11-24 09:49:12.189 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:12 compute-0 nova_compute[257700]: 2025-11-24 09:49:12.190 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:12 compute-0 nova_compute[257700]: 2025-11-24 09:49:12.190 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:12 compute-0 nova_compute[257700]: 2025-11-24 09:49:12.190 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:12 compute-0 nova_compute[257700]: 2025-11-24 09:49:12.190 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:49:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:12.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2451919561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1597135010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3250163179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:12 compute-0 ceph-mon[74331]: pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3464303203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:49:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:12 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e780091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:13.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:13 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e4c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:14 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:49:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:14.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:14 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:15 compute-0 ceph-mon[74331]: pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:49:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:15.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:49:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:49:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:49:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:49:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:49:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:49:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:15 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e780091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:16 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e4c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:49:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:49:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:16.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:16 compute-0 sshd-session[260227]: Invalid user invitado from 121.31.210.125 port 55796
Nov 24 09:49:16 compute-0 sshd-session[260227]: Received disconnect from 121.31.210.125 port 55796:11: Bye Bye [preauth]
Nov 24 09:49:16 compute-0 sshd-session[260227]: Disconnected from invalid user invitado 121.31.210.125 port 55796 [preauth]
Nov 24 09:49:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:16 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e4c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:49:17.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:49:17 compute-0 ceph-mon[74331]: pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:49:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:17.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:17 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:18 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:49:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:18.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:18 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e4c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:19 compute-0 ceph-mon[74331]: pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 597 B/s wr, 2 op/s
Nov 24 09:49:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:19.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:19 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50002f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:20 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:20.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:49:20.558 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:49:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:49:20.559 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:49:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:49:20.559 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:49:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:20 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:49:20] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:49:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:49:20] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:49:21 compute-0 ceph-mon[74331]: pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:21.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:21 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:21 compute-0 podman[260236]: 2025-11-24 09:49:21.775809768 +0000 UTC m=+0.052552355 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:49:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:22 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:22.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:22 compute-0 podman[260258]: 2025-11-24 09:49:22.853802251 +0000 UTC m=+0.130337358 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 09:49:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:22 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e440016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:23 compute-0 ceph-mon[74331]: pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:23.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:23 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:24 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:49:24 compute-0 sudo[260285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:49:24 compute-0 sudo[260285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:24 compute-0 sudo[260285]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:24 compute-0 sudo[260311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:49:24 compute-0 sudo[260311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:24.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:24 compute-0 sudo[260336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:49:24 compute-0 sudo[260336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:24 compute-0 sudo[260336]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:24 compute-0 sudo[260311]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:24 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78009ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:49:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:49:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:49:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:49:25 compute-0 sudo[260391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:49:25 compute-0 sudo[260391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:25 compute-0 sudo[260391]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:25 compute-0 sudo[260416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:49:25 compute-0 sudo[260416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:25 compute-0 ceph-mon[74331]: pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:49:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:49:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:49:25 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:49:25 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:49:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:49:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:49:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:49:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:25.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:25 compute-0 podman[260484]: 2025-11-24 09:49:25.547878037 +0000 UTC m=+0.043108545 container create 91800b36ef874cf630049c44ce1b090c534a19049b7d4dada6d23f3c6afdf156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:49:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:25 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e440016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:25 compute-0 systemd[1]: Started libpod-conmon-91800b36ef874cf630049c44ce1b090c534a19049b7d4dada6d23f3c6afdf156.scope.
Nov 24 09:49:25 compute-0 podman[260484]: 2025-11-24 09:49:25.529071058 +0000 UTC m=+0.024301576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:49:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:49:25 compute-0 podman[260484]: 2025-11-24 09:49:25.652208969 +0000 UTC m=+0.147439487 container init 91800b36ef874cf630049c44ce1b090c534a19049b7d4dada6d23f3c6afdf156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:49:25 compute-0 podman[260484]: 2025-11-24 09:49:25.664429987 +0000 UTC m=+0.159660495 container start 91800b36ef874cf630049c44ce1b090c534a19049b7d4dada6d23f3c6afdf156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:49:25 compute-0 podman[260484]: 2025-11-24 09:49:25.670310102 +0000 UTC m=+0.165540600 container attach 91800b36ef874cf630049c44ce1b090c534a19049b7d4dada6d23f3c6afdf156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williams, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:49:25 compute-0 amazing_williams[260500]: 167 167
Nov 24 09:49:25 compute-0 systemd[1]: libpod-91800b36ef874cf630049c44ce1b090c534a19049b7d4dada6d23f3c6afdf156.scope: Deactivated successfully.
Nov 24 09:49:25 compute-0 podman[260484]: 2025-11-24 09:49:25.674139375 +0000 UTC m=+0.169369883 container died 91800b36ef874cf630049c44ce1b090c534a19049b7d4dada6d23f3c6afdf156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williams, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-da4e3347d5acfe968d9415871dd0a7f21ed6979cf63d8445aa5561ca9b6a5419-merged.mount: Deactivated successfully.
Nov 24 09:49:25 compute-0 podman[260484]: 2025-11-24 09:49:25.715979778 +0000 UTC m=+0.211210276 container remove 91800b36ef874cf630049c44ce1b090c534a19049b7d4dada6d23f3c6afdf156 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 09:49:25 compute-0 systemd[1]: libpod-conmon-91800b36ef874cf630049c44ce1b090c534a19049b7d4dada6d23f3c6afdf156.scope: Deactivated successfully.
Nov 24 09:49:25 compute-0 podman[260524]: 2025-11-24 09:49:25.863585289 +0000 UTC m=+0.037403466 container create 0d14b68447d5ca5774c3909ed0dfe47cbccae31a7dc637911faed38d777a7101 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_nash, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Nov 24 09:49:25 compute-0 systemd[1]: Started libpod-conmon-0d14b68447d5ca5774c3909ed0dfe47cbccae31a7dc637911faed38d777a7101.scope.
Nov 24 09:49:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f70e0af4ce7f08b53b685096c95d6adf5c7c95fabe49ab89e29faebeed8f09b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f70e0af4ce7f08b53b685096c95d6adf5c7c95fabe49ab89e29faebeed8f09b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f70e0af4ce7f08b53b685096c95d6adf5c7c95fabe49ab89e29faebeed8f09b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f70e0af4ce7f08b53b685096c95d6adf5c7c95fabe49ab89e29faebeed8f09b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f70e0af4ce7f08b53b685096c95d6adf5c7c95fabe49ab89e29faebeed8f09b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:25 compute-0 podman[260524]: 2025-11-24 09:49:25.936882491 +0000 UTC m=+0.110700698 container init 0d14b68447d5ca5774c3909ed0dfe47cbccae31a7dc637911faed38d777a7101 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_nash, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:49:25 compute-0 podman[260524]: 2025-11-24 09:49:25.847806652 +0000 UTC m=+0.021624869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:49:25 compute-0 podman[260524]: 2025-11-24 09:49:25.944017225 +0000 UTC m=+0.117835412 container start 0d14b68447d5ca5774c3909ed0dfe47cbccae31a7dc637911faed38d777a7101 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_nash, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:49:25 compute-0 podman[260524]: 2025-11-24 09:49:25.946787433 +0000 UTC m=+0.120605620 container attach 0d14b68447d5ca5774c3909ed0dfe47cbccae31a7dc637911faed38d777a7101 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_nash, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:49:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094925 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:49:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:26 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:49:26 compute-0 upbeat_nash[260541]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:49:26 compute-0 upbeat_nash[260541]: --> All data devices are unavailable
Nov 24 09:49:26 compute-0 systemd[1]: libpod-0d14b68447d5ca5774c3909ed0dfe47cbccae31a7dc637911faed38d777a7101.scope: Deactivated successfully.
Nov 24 09:49:26 compute-0 podman[260524]: 2025-11-24 09:49:26.260147577 +0000 UTC m=+0.433965774 container died 0d14b68447d5ca5774c3909ed0dfe47cbccae31a7dc637911faed38d777a7101 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:49:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f70e0af4ce7f08b53b685096c95d6adf5c7c95fabe49ab89e29faebeed8f09b5-merged.mount: Deactivated successfully.
Nov 24 09:49:26 compute-0 podman[260524]: 2025-11-24 09:49:26.307410553 +0000 UTC m=+0.481228750 container remove 0d14b68447d5ca5774c3909ed0dfe47cbccae31a7dc637911faed38d777a7101 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:49:26 compute-0 systemd[1]: libpod-conmon-0d14b68447d5ca5774c3909ed0dfe47cbccae31a7dc637911faed38d777a7101.scope: Deactivated successfully.
Nov 24 09:49:26 compute-0 sudo[260416]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:26.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:26 compute-0 sudo[260570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:49:26 compute-0 sudo[260570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:26 compute-0 sudo[260570]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:26 compute-0 sudo[260595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:49:26 compute-0 sudo[260595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:26 compute-0 podman[260661]: 2025-11-24 09:49:26.8388755 +0000 UTC m=+0.034979157 container create b0299cb349a18d3babd9a68d967455a7329ab99bdd445530973119c924e1c5c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:49:26 compute-0 systemd[1]: Started libpod-conmon-b0299cb349a18d3babd9a68d967455a7329ab99bdd445530973119c924e1c5c8.scope.
Nov 24 09:49:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:49:26 compute-0 podman[260661]: 2025-11-24 09:49:26.910978044 +0000 UTC m=+0.107081721 container init b0299cb349a18d3babd9a68d967455a7329ab99bdd445530973119c924e1c5c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_carver, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 24 09:49:26 compute-0 podman[260661]: 2025-11-24 09:49:26.91740917 +0000 UTC m=+0.113512827 container start b0299cb349a18d3babd9a68d967455a7329ab99bdd445530973119c924e1c5c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_carver, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Nov 24 09:49:26 compute-0 podman[260661]: 2025-11-24 09:49:26.823707849 +0000 UTC m=+0.019811526 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:49:26 compute-0 podman[260661]: 2025-11-24 09:49:26.920606929 +0000 UTC m=+0.116710606 container attach b0299cb349a18d3babd9a68d967455a7329ab99bdd445530973119c924e1c5c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_carver, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:49:26 compute-0 nice_carver[260678]: 167 167
Nov 24 09:49:26 compute-0 systemd[1]: libpod-b0299cb349a18d3babd9a68d967455a7329ab99bdd445530973119c924e1c5c8.scope: Deactivated successfully.
Nov 24 09:49:26 compute-0 podman[260661]: 2025-11-24 09:49:26.922253299 +0000 UTC m=+0.118356956 container died b0299cb349a18d3babd9a68d967455a7329ab99bdd445530973119c924e1c5c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:49:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:26 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b1a3ce760ab93676398596e05ace4daf630cac0f4f423fdb7d499de4f977c2c-merged.mount: Deactivated successfully.
Nov 24 09:49:26 compute-0 podman[260661]: 2025-11-24 09:49:26.953870372 +0000 UTC m=+0.149974029 container remove b0299cb349a18d3babd9a68d967455a7329ab99bdd445530973119c924e1c5c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_carver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:49:26 compute-0 systemd[1]: libpod-conmon-b0299cb349a18d3babd9a68d967455a7329ab99bdd445530973119c924e1c5c8.scope: Deactivated successfully.
Nov 24 09:49:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:49:27.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:49:27 compute-0 podman[260702]: 2025-11-24 09:49:27.097156667 +0000 UTC m=+0.035656294 container create b28dfe59d40df70bf471a172a20cf6fe132cb2b1b147ad8b85fa9aca37d1953d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_benz, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:49:27 compute-0 systemd[1]: Started libpod-conmon-b28dfe59d40df70bf471a172a20cf6fe132cb2b1b147ad8b85fa9aca37d1953d.scope.
Nov 24 09:49:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:49:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e4505060f5562bb5046c58f0d78c0dd5f8623f28c956c8587d7c2bd365501a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e4505060f5562bb5046c58f0d78c0dd5f8623f28c956c8587d7c2bd365501a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e4505060f5562bb5046c58f0d78c0dd5f8623f28c956c8587d7c2bd365501a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e4505060f5562bb5046c58f0d78c0dd5f8623f28c956c8587d7c2bd365501a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:27 compute-0 podman[260702]: 2025-11-24 09:49:27.174049267 +0000 UTC m=+0.112548914 container init b28dfe59d40df70bf471a172a20cf6fe132cb2b1b147ad8b85fa9aca37d1953d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:49:27 compute-0 podman[260702]: 2025-11-24 09:49:27.082352064 +0000 UTC m=+0.020851711 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:49:27 compute-0 podman[260702]: 2025-11-24 09:49:27.181201902 +0000 UTC m=+0.119701529 container start b28dfe59d40df70bf471a172a20cf6fe132cb2b1b147ad8b85fa9aca37d1953d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_benz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:49:27 compute-0 podman[260702]: 2025-11-24 09:49:27.185170889 +0000 UTC m=+0.123670556 container attach b28dfe59d40df70bf471a172a20cf6fe132cb2b1b147ad8b85fa9aca37d1953d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 24 09:49:27 compute-0 ceph-mon[74331]: pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:49:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:27.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:27 compute-0 sweet_benz[260719]: {
Nov 24 09:49:27 compute-0 sweet_benz[260719]:     "0": [
Nov 24 09:49:27 compute-0 sweet_benz[260719]:         {
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             "devices": [
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "/dev/loop3"
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             ],
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             "lv_name": "ceph_lv0",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             "lv_size": "21470642176",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             "name": "ceph_lv0",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             "tags": {
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.cluster_name": "ceph",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.crush_device_class": "",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.encrypted": "0",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.osd_id": "0",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.type": "block",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.vdo": "0",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:                 "ceph.with_tpm": "0"
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             },
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             "type": "block",
Nov 24 09:49:27 compute-0 sweet_benz[260719]:             "vg_name": "ceph_vg0"
Nov 24 09:49:27 compute-0 sweet_benz[260719]:         }
Nov 24 09:49:27 compute-0 sweet_benz[260719]:     ]
Nov 24 09:49:27 compute-0 sweet_benz[260719]: }
Nov 24 09:49:27 compute-0 systemd[1]: libpod-b28dfe59d40df70bf471a172a20cf6fe132cb2b1b147ad8b85fa9aca37d1953d.scope: Deactivated successfully.
Nov 24 09:49:27 compute-0 podman[260702]: 2025-11-24 09:49:27.457148201 +0000 UTC m=+0.395647838 container died b28dfe59d40df70bf471a172a20cf6fe132cb2b1b147ad8b85fa9aca37d1953d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 24 09:49:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-47e4505060f5562bb5046c58f0d78c0dd5f8623f28c956c8587d7c2bd365501a-merged.mount: Deactivated successfully.
Nov 24 09:49:27 compute-0 podman[260702]: 2025-11-24 09:49:27.495542939 +0000 UTC m=+0.434042566 container remove b28dfe59d40df70bf471a172a20cf6fe132cb2b1b147ad8b85fa9aca37d1953d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:49:27 compute-0 systemd[1]: libpod-conmon-b28dfe59d40df70bf471a172a20cf6fe132cb2b1b147ad8b85fa9aca37d1953d.scope: Deactivated successfully.
Nov 24 09:49:27 compute-0 sudo[260595]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:27 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e7800a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:27 compute-0 sudo[260740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:49:27 compute-0 sudo[260740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:27 compute-0 sudo[260740]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:27 compute-0 sudo[260765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:49:27 compute-0 sudo[260765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:28 compute-0 podman[260830]: 2025-11-24 09:49:28.057699628 +0000 UTC m=+0.039472867 container create 0f3fab67e0ee47b3375543d38070bf408953db49f592a12fefa506a35c7f5127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_bohr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:49:28 compute-0 systemd[1]: Started libpod-conmon-0f3fab67e0ee47b3375543d38070bf408953db49f592a12fefa506a35c7f5127.scope.
Nov 24 09:49:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:49:28 compute-0 podman[260830]: 2025-11-24 09:49:28.129905103 +0000 UTC m=+0.111678342 container init 0f3fab67e0ee47b3375543d38070bf408953db49f592a12fefa506a35c7f5127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_bohr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:49:28 compute-0 podman[260830]: 2025-11-24 09:49:28.040074446 +0000 UTC m=+0.021847705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:49:28 compute-0 podman[260830]: 2025-11-24 09:49:28.13550661 +0000 UTC m=+0.117279849 container start 0f3fab67e0ee47b3375543d38070bf408953db49f592a12fefa506a35c7f5127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_bohr, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:49:28 compute-0 podman[260830]: 2025-11-24 09:49:28.138226746 +0000 UTC m=+0.119999985 container attach 0f3fab67e0ee47b3375543d38070bf408953db49f592a12fefa506a35c7f5127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_bohr, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:49:28 compute-0 loving_bohr[260847]: 167 167
Nov 24 09:49:28 compute-0 systemd[1]: libpod-0f3fab67e0ee47b3375543d38070bf408953db49f592a12fefa506a35c7f5127.scope: Deactivated successfully.
Nov 24 09:49:28 compute-0 podman[260830]: 2025-11-24 09:49:28.140286447 +0000 UTC m=+0.122059686 container died 0f3fab67e0ee47b3375543d38070bf408953db49f592a12fefa506a35c7f5127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_bohr, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:49:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:28 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e7800a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4df09c050f8e88dc4e07f3ed839f02680b7803dd210e23a82292ef01703b821-merged.mount: Deactivated successfully.
Nov 24 09:49:28 compute-0 podman[260830]: 2025-11-24 09:49:28.174116694 +0000 UTC m=+0.155889933 container remove 0f3fab67e0ee47b3375543d38070bf408953db49f592a12fefa506a35c7f5127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:49:28 compute-0 systemd[1]: libpod-conmon-0f3fab67e0ee47b3375543d38070bf408953db49f592a12fefa506a35c7f5127.scope: Deactivated successfully.
Nov 24 09:49:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:49:28 compute-0 podman[260872]: 2025-11-24 09:49:28.330014137 +0000 UTC m=+0.033054769 container create 09ca4cc53eca527b21d1dbedac62acb85febfd16832bce5a8b166a275a75e3fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_matsumoto, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 09:49:28 compute-0 systemd[1]: Started libpod-conmon-09ca4cc53eca527b21d1dbedac62acb85febfd16832bce5a8b166a275a75e3fa.scope.
Nov 24 09:49:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb43e09f597709b974269c7ac56e4750e4f86f4687c58132814351d35d9ed64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb43e09f597709b974269c7ac56e4750e4f86f4687c58132814351d35d9ed64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb43e09f597709b974269c7ac56e4750e4f86f4687c58132814351d35d9ed64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb43e09f597709b974269c7ac56e4750e4f86f4687c58132814351d35d9ed64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:49:28 compute-0 podman[260872]: 2025-11-24 09:49:28.396287288 +0000 UTC m=+0.099327950 container init 09ca4cc53eca527b21d1dbedac62acb85febfd16832bce5a8b166a275a75e3fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:49:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:28.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:28 compute-0 podman[260872]: 2025-11-24 09:49:28.403008352 +0000 UTC m=+0.106048984 container start 09ca4cc53eca527b21d1dbedac62acb85febfd16832bce5a8b166a275a75e3fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_matsumoto, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:49:28 compute-0 podman[260872]: 2025-11-24 09:49:28.406217181 +0000 UTC m=+0.109257813 container attach 09ca4cc53eca527b21d1dbedac62acb85febfd16832bce5a8b166a275a75e3fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 09:49:28 compute-0 podman[260872]: 2025-11-24 09:49:28.316178908 +0000 UTC m=+0.019219560 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:49:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:28 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:28 compute-0 lvm[260962]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:49:28 compute-0 lvm[260962]: VG ceph_vg0 finished
Nov 24 09:49:29 compute-0 gracious_matsumoto[260888]: {}
Nov 24 09:49:29 compute-0 systemd[1]: libpod-09ca4cc53eca527b21d1dbedac62acb85febfd16832bce5a8b166a275a75e3fa.scope: Deactivated successfully.
Nov 24 09:49:29 compute-0 systemd[1]: libpod-09ca4cc53eca527b21d1dbedac62acb85febfd16832bce5a8b166a275a75e3fa.scope: Consumed 1.001s CPU time.
Nov 24 09:49:29 compute-0 podman[260872]: 2025-11-24 09:49:29.083191036 +0000 UTC m=+0.786231688 container died 09ca4cc53eca527b21d1dbedac62acb85febfd16832bce5a8b166a275a75e3fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_matsumoto, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:49:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bb43e09f597709b974269c7ac56e4750e4f86f4687c58132814351d35d9ed64-merged.mount: Deactivated successfully.
Nov 24 09:49:29 compute-0 podman[260872]: 2025-11-24 09:49:29.128412132 +0000 UTC m=+0.831452754 container remove 09ca4cc53eca527b21d1dbedac62acb85febfd16832bce5a8b166a275a75e3fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:49:29 compute-0 systemd[1]: libpod-conmon-09ca4cc53eca527b21d1dbedac62acb85febfd16832bce5a8b166a275a75e3fa.scope: Deactivated successfully.
Nov 24 09:49:29 compute-0 sudo[260765]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:49:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:49:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:49:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:49:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:29.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:29 compute-0 ceph-mon[74331]: pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:49:29 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:49:29 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:49:29 compute-0 sudo[260977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:49:29 compute-0 sudo[260977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:29 compute-0 sudo[260977]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:29 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:30 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:49:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:30.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:30 compute-0 ceph-mon[74331]: pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:49:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:30 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:49:30] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:49:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:49:30] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:49:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:31.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:49:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:31 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e7800a7e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:31 compute-0 podman[261004]: 2025-11-24 09:49:31.778978855 +0000 UTC m=+0.054093424 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 24 09:49:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:32 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:49:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:32.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:32 compute-0 ceph-mon[74331]: pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:49:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:32 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:49:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:33.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:49:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:33 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:34 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:49:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:34.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:34 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:49:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:34 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:35 compute-0 ceph-mon[74331]: pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:49:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:35.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:35 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:36 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:49:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:36.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:36 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:49:37.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:49:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:49:37.077Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:49:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:49:37.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:49:37 compute-0 ceph-mon[74331]: pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:49:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:37.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:37 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:37 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:49:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:37 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:49:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:38 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:49:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:38.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:38 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:39 compute-0 ceph-mon[74331]: pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:49:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:39.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:39 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:40 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Nov 24 09:49:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:40.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:40 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:49:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:40 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:49:40] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:49:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:49:40] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:49:41 compute-0 ceph-mon[74331]: pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Nov 24 09:49:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:41.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:41 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:42 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:42.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:42 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:43 compute-0 sshd-session[261035]: Invalid user cacti from 36.255.3.203 port 34767
Nov 24 09:49:43 compute-0 ceph-mon[74331]: pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:43.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:43 compute-0 sshd-session[261035]: Received disconnect from 36.255.3.203 port 34767:11: Bye Bye [preauth]
Nov 24 09:49:43 compute-0 sshd-session[261035]: Disconnected from invalid user cacti 36.255.3.203 port 34767 [preauth]
Nov 24 09:49:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:43 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:43 compute-0 sshd-session[261038]: Invalid user test1 from 14.215.126.91 port 38912
Nov 24 09:49:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:44 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:44 compute-0 sshd-session[261038]: Received disconnect from 14.215.126.91 port 38912:11: Bye Bye [preauth]
Nov 24 09:49:44 compute-0 sshd-session[261038]: Disconnected from invalid user test1 14.215.126.91 port 38912 [preauth]
Nov 24 09:49:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:44.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:44 compute-0 sudo[261042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:49:44 compute-0 sudo[261042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:49:44 compute-0 sudo[261042]: pam_unix(sudo:session): session closed for user root
Nov 24 09:49:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:44 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:45 compute-0 ceph-mon[74331]: pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:49:45
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.meta', '.nfs', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'images']
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:49:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:45.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:49:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:45 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:49:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:49:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094946 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:49:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:46 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:49:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:46.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:46 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:49:47.078Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:49:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:49:47.078Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:49:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:49:47.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:49:47 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Check health
Nov 24 09:49:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:47.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:47 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:48 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:48 compute-0 ceph-mon[74331]: pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:48.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:48 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:49 compute-0 ceph-mon[74331]: pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:49:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:49.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:49 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:50 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:49:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:50.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:50 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:49:50] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:49:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:49:50] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 24 09:49:51 compute-0 rsyslogd[1004]: imjournal: 3654 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 24 09:49:51 compute-0 ceph-mon[74331]: pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:49:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:51.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:51 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:52 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e380016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:49:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:52.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:52 compute-0 podman[261077]: 2025-11-24 09:49:52.779490769 +0000 UTC m=+0.055349094 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 09:49:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/094952 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:49:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:52 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:53 compute-0 ceph-mon[74331]: pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:49:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:53.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:53 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:53 compute-0 podman[261099]: 2025-11-24 09:49:53.849054387 +0000 UTC m=+0.127089788 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 24 09:49:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:54 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:54 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Nov 24 09:49:54 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/394130552' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 09:49:54 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.15162 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 09:49:54 compute-0 ceph-mgr[74626]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 09:49:54 compute-0 ceph-mgr[74626]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 09:49:54 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.24541 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 09:49:54 compute-0 ceph-mgr[74626]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 09:49:54 compute-0 ceph-mgr[74626]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 09:49:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:54.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:54 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.24541 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 24 09:49:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:54 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e380016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:55.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:55 compute-0 ceph-mon[74331]: pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:55 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/394130552' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 09:49:55 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2190235602' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 09:49:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:55 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:56 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:49:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:56.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:49:56 compute-0 ceph-mon[74331]: from='client.15162 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 09:49:56 compute-0 ceph-mon[74331]: from='client.24541 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 09:49:56 compute-0 ceph-mon[74331]: from='client.24541 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 24 09:49:56 compute-0 ceph-mon[74331]: pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:56 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:49:57.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:49:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:57.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:57 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e380016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:58 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:49:58.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:49:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:58 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:49:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:49:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:49:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:49:59.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:49:59 compute-0 ceph-mon[74331]: pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:49:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:49:59 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:00 compute-0 ceph-mon[74331]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 24 09:50:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:00 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:50:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:00.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:00 compute-0 ceph-mon[74331]: overall HEALTH_OK
Nov 24 09:50:00 compute-0 ceph-mon[74331]: pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:50:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.606724) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800606766, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2125, "num_deletes": 251, "total_data_size": 4246897, "memory_usage": 4321744, "flush_reason": "Manual Compaction"}
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800630398, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4163317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20024, "largest_seqno": 22147, "table_properties": {"data_size": 4153694, "index_size": 6117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19510, "raw_average_key_size": 20, "raw_value_size": 4134666, "raw_average_value_size": 4284, "num_data_blocks": 268, "num_entries": 965, "num_filter_entries": 965, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977576, "oldest_key_time": 1763977576, "file_creation_time": 1763977800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 23721 microseconds, and 8138 cpu microseconds.
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.630442) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4163317 bytes OK
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.630463) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.632030) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.632042) EVENT_LOG_v1 {"time_micros": 1763977800632037, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.632059) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4238335, prev total WAL file size 4238335, number of live WAL files 2.
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.632905) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(4065KB)], [44(12MB)]
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800632928, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 17118074, "oldest_snapshot_seqno": -1}
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5449 keys, 14940542 bytes, temperature: kUnknown
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800711980, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 14940542, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14901757, "index_size": 24083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 137479, "raw_average_key_size": 25, "raw_value_size": 14801029, "raw_average_value_size": 2716, "num_data_blocks": 993, "num_entries": 5449, "num_filter_entries": 5449, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763977800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.712275) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14940542 bytes
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.713665) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.3 rd, 188.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.4 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 5969, records dropped: 520 output_compression: NoCompression
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.713681) EVENT_LOG_v1 {"time_micros": 1763977800713673, "job": 22, "event": "compaction_finished", "compaction_time_micros": 79138, "compaction_time_cpu_micros": 28348, "output_level": 6, "num_output_files": 1, "total_output_size": 14940542, "num_input_records": 5969, "num_output_records": 5449, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800714481, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977800716949, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.632832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.717179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.717188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.717190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.717192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:50:00 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:50:00.717194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:50:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:00 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 09:50:00 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4055330357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:50:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 09:50:00 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4055330357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:50:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:50:00] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:50:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:50:00] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:50:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:01.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/4055330357' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:50:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/4055330357' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:50:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:02 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:02 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:50:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:02.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:02 compute-0 ceph-mon[74331]: pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:02 compute-0 podman[261134]: 2025-11-24 09:50:02.765801786 +0000 UTC m=+0.045452374 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:50:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:02 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:03.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:03 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:04 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:04.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:04 compute-0 sudo[261156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:50:04 compute-0 sudo[261156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:04 compute-0 sudo[261156]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:04 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:05 compute-0 ceph-mon[74331]: pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:05.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:05 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:50:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:05 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:50:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:05 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:06 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:06.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:06 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:50:07.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:50:07 compute-0 ceph-mon[74331]: pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:07.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:07 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:08 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:08.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:08 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:50:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:08 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=cleanup t=2025-11-24T09:50:09.188559581Z level=info msg="Completed cleanup jobs" duration=30.483295ms
Nov 24 09:50:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=grafana.update.checker t=2025-11-24T09:50:09.301314849Z level=info msg="Update check succeeded" duration=52.33811ms
Nov 24 09:50:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=plugins.update.checker t=2025-11-24T09:50:09.305891501Z level=info msg="Update check succeeded" duration=48.614029ms
Nov 24 09:50:09 compute-0 ceph-mon[74331]: pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:09.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:09 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:09 compute-0 nova_compute[257700]: 2025-11-24 09:50:09.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:09 compute-0 nova_compute[257700]: 2025-11-24 09:50:09.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:50:09 compute-0 nova_compute[257700]: 2025-11-24 09:50:09.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:50:09 compute-0 nova_compute[257700]: 2025-11-24 09:50:09.950 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:50:09 compute-0 nova_compute[257700]: 2025-11-24 09:50:09.951 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:09 compute-0 nova_compute[257700]: 2025-11-24 09:50:09.951 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:09 compute-0 nova_compute[257700]: 2025-11-24 09:50:09.951 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:50:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:10 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:10.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:10 compute-0 nova_compute[257700]: 2025-11-24 09:50:10.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:10 compute-0 nova_compute[257700]: 2025-11-24 09:50:10.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:10 compute-0 nova_compute[257700]: 2025-11-24 09:50:10.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:10 compute-0 nova_compute[257700]: 2025-11-24 09:50:10.938 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:50:10 compute-0 nova_compute[257700]: 2025-11-24 09:50:10.939 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:50:10 compute-0 nova_compute[257700]: 2025-11-24 09:50:10.939 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:50:10 compute-0 nova_compute[257700]: 2025-11-24 09:50:10.939 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:50:10 compute-0 nova_compute[257700]: 2025-11-24 09:50:10.940 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:50:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:10 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:50:10] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:50:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:50:10] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Nov 24 09:50:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:50:11 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/695719103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:11 compute-0 ceph-mon[74331]: pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1883317646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3497774359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/695719103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:11 compute-0 nova_compute[257700]: 2025-11-24 09:50:11.370 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:50:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:50:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:11.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:50:11 compute-0 nova_compute[257700]: 2025-11-24 09:50:11.525 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:50:11 compute-0 nova_compute[257700]: 2025-11-24 09:50:11.526 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4946MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:50:11 compute-0 nova_compute[257700]: 2025-11-24 09:50:11.526 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:50:11 compute-0 nova_compute[257700]: 2025-11-24 09:50:11.526 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:50:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:11 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:11 compute-0 nova_compute[257700]: 2025-11-24 09:50:11.618 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:50:11 compute-0 nova_compute[257700]: 2025-11-24 09:50:11.619 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:50:11 compute-0 nova_compute[257700]: 2025-11-24 09:50:11.635 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:50:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:50:12 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2015223078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:12 compute-0 nova_compute[257700]: 2025-11-24 09:50:12.098 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:50:12 compute-0 nova_compute[257700]: 2025-11-24 09:50:12.104 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:50:12 compute-0 nova_compute[257700]: 2025-11-24 09:50:12.120 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:50:12 compute-0 nova_compute[257700]: 2025-11-24 09:50:12.122 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:50:12 compute-0 nova_compute[257700]: 2025-11-24 09:50:12.122 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:50:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:12 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:50:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2769118241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3495070877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2015223078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:50:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:12.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:12 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:13 compute-0 nova_compute[257700]: 2025-11-24 09:50:13.116 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:13 compute-0 nova_compute[257700]: 2025-11-24 09:50:13.116 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:13 compute-0 nova_compute[257700]: 2025-11-24 09:50:13.117 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:50:13 compute-0 ceph-mon[74331]: pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:50:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:50:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:13.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:50:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:13 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:14 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:14 compute-0 ceph-mon[74331]: pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:14.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/095014 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:50:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:14 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:50:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:50:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:15.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:50:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:50:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:50:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:50:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:50:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:15 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:16 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:16.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:16 compute-0 ceph-mon[74331]: pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:16 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:50:17.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:50:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Nov 24 09:50:17 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2639730056' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Nov 24 09:50:17 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2923448658' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.15207 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-0 ceph-mgr[74626]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 09:50:17 compute-0 ceph-mgr[74626]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 09:50:17 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.15210 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-0 ceph-mgr[74626]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 09:50:17 compute-0 ceph-mgr[74626]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 24 09:50:17 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.15210 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:17.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2639730056' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2923448658' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-0 ceph-mon[74331]: from='client.15207 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-0 ceph-mon[74331]: from='client.15210 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-0 ceph-mon[74331]: from='client.15210 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Nov 24 09:50:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:17 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003850 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:18 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:18.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:18 compute-0 ceph-mon[74331]: pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:18 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:19.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:19 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:20 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:50:20.559 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:50:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:50:20.560 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:50:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:50:20.560 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:50:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:20 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:50:20] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:50:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:50:20] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Nov 24 09:50:21 compute-0 ceph-mon[74331]: pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:21.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:21 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:22 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:22.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:22 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c002530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:23 compute-0 ceph-mon[74331]: pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:50:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:23.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:23 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:23 compute-0 podman[261246]: 2025-11-24 09:50:23.766853624 +0000 UTC m=+0.048607380 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:50:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:24 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:50:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:24.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:24 compute-0 podman[261268]: 2025-11-24 09:50:24.797840205 +0000 UTC m=+0.079026089 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 09:50:24 compute-0 sudo[261285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:50:24 compute-0 sudo[261285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:24 compute-0 sudo[261285]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:24 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44001dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:25 compute-0 ceph-mon[74331]: pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:50:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:25.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:25 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c002530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:26 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003ec0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:50:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:26.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:26 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:50:27.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:50:27 compute-0 ceph-mon[74331]: pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:50:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:27.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:27 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44001dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:28 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c002530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v697: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:50:28 compute-0 ceph-mon[74331]: pgmap v697: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:50:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:28.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:28 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c002530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:29.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:29 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:29 compute-0 sudo[261324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:50:29 compute-0 sudo[261324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:29 compute-0 sudo[261324]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:29 compute-0 sudo[261349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:50:29 compute-0 sudo[261349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/095030 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:50:30 compute-0 sudo[261349]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:30 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44001dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 09:50:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:50:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:30 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:30.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:30 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:50:30] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:50:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:50:30] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:50:31 compute-0 ceph-mon[74331]: pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:50:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:50:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:31.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:31 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c003ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:50:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:50:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:32 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v699: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:50:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:50:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:50:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:32.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 09:50:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:32 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44001f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 09:50:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:33 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:33 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:33 compute-0 ceph-mon[74331]: pgmap v699: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:50:33 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:33 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:33 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:33 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v700: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 185 B/s rd, 0 op/s
Nov 24 09:50:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:50:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:50:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:33.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:33 compute-0 sudo[261409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:50:33 compute-0 sudo[261409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:33 compute-0 sudo[261409]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:33 compute-0 sudo[261435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:50:33 compute-0 sudo[261435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:33 compute-0 podman[261433]: 2025-11-24 09:50:33.551946922 +0000 UTC m=+0.062485992 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:50:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:33 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:33 compute-0 podman[261519]: 2025-11-24 09:50:33.920081559 +0000 UTC m=+0.035185559 container create 0effa93016a77b80cba8afc369c9ceab5378337b451d4ffc3f343e6f78351c0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_booth, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:50:33 compute-0 systemd[1]: Started libpod-conmon-0effa93016a77b80cba8afc369c9ceab5378337b451d4ffc3f343e6f78351c0e.scope.
Nov 24 09:50:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:50:33 compute-0 podman[261519]: 2025-11-24 09:50:33.990200068 +0000 UTC m=+0.105304088 container init 0effa93016a77b80cba8afc369c9ceab5378337b451d4ffc3f343e6f78351c0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_booth, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:50:33 compute-0 podman[261519]: 2025-11-24 09:50:33.996607145 +0000 UTC m=+0.111711145 container start 0effa93016a77b80cba8afc369c9ceab5378337b451d4ffc3f343e6f78351c0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_booth, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:50:33 compute-0 podman[261519]: 2025-11-24 09:50:33.999501057 +0000 UTC m=+0.114605047 container attach 0effa93016a77b80cba8afc369c9ceab5378337b451d4ffc3f343e6f78351c0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_booth, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 09:50:34 compute-0 podman[261519]: 2025-11-24 09:50:33.904520835 +0000 UTC m=+0.019624855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:50:34 compute-0 amazing_booth[261535]: 167 167
Nov 24 09:50:34 compute-0 systemd[1]: libpod-0effa93016a77b80cba8afc369c9ceab5378337b451d4ffc3f343e6f78351c0e.scope: Deactivated successfully.
Nov 24 09:50:34 compute-0 podman[261519]: 2025-11-24 09:50:34.003008354 +0000 UTC m=+0.118112354 container died 0effa93016a77b80cba8afc369c9ceab5378337b451d4ffc3f343e6f78351c0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_booth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:50:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eb8eb41e246a761b495ee4db1558d61f430203578f9a25a828c51cdcb5f7480-merged.mount: Deactivated successfully.
Nov 24 09:50:34 compute-0 podman[261519]: 2025-11-24 09:50:34.039398831 +0000 UTC m=+0.154502831 container remove 0effa93016a77b80cba8afc369c9ceab5378337b451d4ffc3f343e6f78351c0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_booth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 24 09:50:34 compute-0 systemd[1]: libpod-conmon-0effa93016a77b80cba8afc369c9ceab5378337b451d4ffc3f343e6f78351c0e.scope: Deactivated successfully.
Nov 24 09:50:34 compute-0 ceph-mon[74331]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 24 09:50:34 compute-0 podman[261560]: 2025-11-24 09:50:34.203950818 +0000 UTC m=+0.038092370 container create d85f977332100d970d08f6d73b610a74bbe0b6d6805ebd82e0c131103700ae67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_feynman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 09:50:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:34 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c003ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:34 compute-0 systemd[1]: Started libpod-conmon-d85f977332100d970d08f6d73b610a74bbe0b6d6805ebd82e0c131103700ae67.scope.
Nov 24 09:50:34 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:34 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 09:50:34 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:50:34 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:50:34 compute-0 ceph-mon[74331]: pgmap v700: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 185 B/s rd, 0 op/s
Nov 24 09:50:34 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:34 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:34 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:50:34 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:50:34 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:50:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2da03ec7d70653886837a3aa48c6688179585665a6349f4df3459e38a165f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2da03ec7d70653886837a3aa48c6688179585665a6349f4df3459e38a165f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2da03ec7d70653886837a3aa48c6688179585665a6349f4df3459e38a165f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2da03ec7d70653886837a3aa48c6688179585665a6349f4df3459e38a165f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2da03ec7d70653886837a3aa48c6688179585665a6349f4df3459e38a165f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:34 compute-0 podman[261560]: 2025-11-24 09:50:34.280634459 +0000 UTC m=+0.114776011 container init d85f977332100d970d08f6d73b610a74bbe0b6d6805ebd82e0c131103700ae67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_feynman, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 09:50:34 compute-0 podman[261560]: 2025-11-24 09:50:34.187759969 +0000 UTC m=+0.021901541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:50:34 compute-0 podman[261560]: 2025-11-24 09:50:34.287285133 +0000 UTC m=+0.121426695 container start d85f977332100d970d08f6d73b610a74bbe0b6d6805ebd82e0c131103700ae67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 24 09:50:34 compute-0 podman[261560]: 2025-11-24 09:50:34.290059781 +0000 UTC m=+0.124201333 container attach d85f977332100d970d08f6d73b610a74bbe0b6d6805ebd82e0c131103700ae67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 09:50:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:34.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:34 compute-0 agitated_feynman[261575]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:50:34 compute-0 agitated_feynman[261575]: --> All data devices are unavailable
Nov 24 09:50:34 compute-0 systemd[1]: libpod-d85f977332100d970d08f6d73b610a74bbe0b6d6805ebd82e0c131103700ae67.scope: Deactivated successfully.
Nov 24 09:50:34 compute-0 podman[261560]: 2025-11-24 09:50:34.640585074 +0000 UTC m=+0.474726626 container died d85f977332100d970d08f6d73b610a74bbe0b6d6805ebd82e0c131103700ae67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_feynman, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:50:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab2da03ec7d70653886837a3aa48c6688179585665a6349f4df3459e38a165f4-merged.mount: Deactivated successfully.
Nov 24 09:50:34 compute-0 podman[261560]: 2025-11-24 09:50:34.683966403 +0000 UTC m=+0.518107955 container remove d85f977332100d970d08f6d73b610a74bbe0b6d6805ebd82e0c131103700ae67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 09:50:34 compute-0 systemd[1]: libpod-conmon-d85f977332100d970d08f6d73b610a74bbe0b6d6805ebd82e0c131103700ae67.scope: Deactivated successfully.
Nov 24 09:50:34 compute-0 sudo[261435]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:34 compute-0 sudo[261603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:50:34 compute-0 sudo[261603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:34 compute-0 sudo[261603]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:34 compute-0 sudo[261628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:50:34 compute-0 sudo[261628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:34 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:35 compute-0 podman[261693]: 2025-11-24 09:50:35.207974823 +0000 UTC m=+0.034475120 container create 1cf61a324b1948e148d0c9d40479f34f0fb4e0cc1b800da3bbf1f97cdd4a3d31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 24 09:50:35 compute-0 systemd[1]: Started libpod-conmon-1cf61a324b1948e148d0c9d40479f34f0fb4e0cc1b800da3bbf1f97cdd4a3d31.scope.
Nov 24 09:50:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v701: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 186 B/s rd, 0 op/s
Nov 24 09:50:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:50:35 compute-0 podman[261693]: 2025-11-24 09:50:35.273988002 +0000 UTC m=+0.100488299 container init 1cf61a324b1948e148d0c9d40479f34f0fb4e0cc1b800da3bbf1f97cdd4a3d31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_margulis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:50:35 compute-0 podman[261693]: 2025-11-24 09:50:35.279958639 +0000 UTC m=+0.106458936 container start 1cf61a324b1948e148d0c9d40479f34f0fb4e0cc1b800da3bbf1f97cdd4a3d31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_margulis, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:50:35 compute-0 ceph-mon[74331]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 24 09:50:35 compute-0 blissful_margulis[261710]: 167 167
Nov 24 09:50:35 compute-0 systemd[1]: libpod-1cf61a324b1948e148d0c9d40479f34f0fb4e0cc1b800da3bbf1f97cdd4a3d31.scope: Deactivated successfully.
Nov 24 09:50:35 compute-0 conmon[261710]: conmon 1cf61a324b1948e148d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1cf61a324b1948e148d0c9d40479f34f0fb4e0cc1b800da3bbf1f97cdd4a3d31.scope/container/memory.events
Nov 24 09:50:35 compute-0 podman[261693]: 2025-11-24 09:50:35.283305551 +0000 UTC m=+0.109805868 container attach 1cf61a324b1948e148d0c9d40479f34f0fb4e0cc1b800da3bbf1f97cdd4a3d31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_margulis, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:50:35 compute-0 podman[261693]: 2025-11-24 09:50:35.28567768 +0000 UTC m=+0.112177987 container died 1cf61a324b1948e148d0c9d40479f34f0fb4e0cc1b800da3bbf1f97cdd4a3d31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_margulis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:50:35 compute-0 podman[261693]: 2025-11-24 09:50:35.192721457 +0000 UTC m=+0.019221774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:50:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-65c93c690a1c87c120cab47bde4146df5c13ba5fd5dc97fb5f599fd42b89c180-merged.mount: Deactivated successfully.
Nov 24 09:50:35 compute-0 podman[261693]: 2025-11-24 09:50:35.315914065 +0000 UTC m=+0.142414362 container remove 1cf61a324b1948e148d0c9d40479f34f0fb4e0cc1b800da3bbf1f97cdd4a3d31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:50:35 compute-0 systemd[1]: libpod-conmon-1cf61a324b1948e148d0c9d40479f34f0fb4e0cc1b800da3bbf1f97cdd4a3d31.scope: Deactivated successfully.
Nov 24 09:50:35 compute-0 podman[261733]: 2025-11-24 09:50:35.460025538 +0000 UTC m=+0.037615178 container create b2f093a936ee40fd6af650cdad93e4efd8ec64451f2786ff72516336fbd797aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_raman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:50:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:35.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:35 compute-0 systemd[1]: Started libpod-conmon-b2f093a936ee40fd6af650cdad93e4efd8ec64451f2786ff72516336fbd797aa.scope.
Nov 24 09:50:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ca7ce079a12a214b52c04da9ef6df8d622f8743bafaa0f371fe65eafd9cbc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ca7ce079a12a214b52c04da9ef6df8d622f8743bafaa0f371fe65eafd9cbc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ca7ce079a12a214b52c04da9ef6df8d622f8743bafaa0f371fe65eafd9cbc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ca7ce079a12a214b52c04da9ef6df8d622f8743bafaa0f371fe65eafd9cbc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:35 compute-0 podman[261733]: 2025-11-24 09:50:35.524718364 +0000 UTC m=+0.102308024 container init b2f093a936ee40fd6af650cdad93e4efd8ec64451f2786ff72516336fbd797aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 24 09:50:35 compute-0 podman[261733]: 2025-11-24 09:50:35.53064752 +0000 UTC m=+0.108237160 container start b2f093a936ee40fd6af650cdad93e4efd8ec64451f2786ff72516336fbd797aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_raman, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:50:35 compute-0 podman[261733]: 2025-11-24 09:50:35.533785117 +0000 UTC m=+0.111374757 container attach b2f093a936ee40fd6af650cdad93e4efd8ec64451f2786ff72516336fbd797aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_raman, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Nov 24 09:50:35 compute-0 podman[261733]: 2025-11-24 09:50:35.445652514 +0000 UTC m=+0.023242174 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:50:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:35 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:35 compute-0 epic_raman[261750]: {
Nov 24 09:50:35 compute-0 epic_raman[261750]:     "0": [
Nov 24 09:50:35 compute-0 epic_raman[261750]:         {
Nov 24 09:50:35 compute-0 epic_raman[261750]:             "devices": [
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "/dev/loop3"
Nov 24 09:50:35 compute-0 epic_raman[261750]:             ],
Nov 24 09:50:35 compute-0 epic_raman[261750]:             "lv_name": "ceph_lv0",
Nov 24 09:50:35 compute-0 epic_raman[261750]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:50:35 compute-0 epic_raman[261750]:             "lv_size": "21470642176",
Nov 24 09:50:35 compute-0 epic_raman[261750]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:50:35 compute-0 epic_raman[261750]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:50:35 compute-0 epic_raman[261750]:             "name": "ceph_lv0",
Nov 24 09:50:35 compute-0 epic_raman[261750]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:50:35 compute-0 epic_raman[261750]:             "tags": {
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.cluster_name": "ceph",
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.crush_device_class": "",
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.encrypted": "0",
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.osd_id": "0",
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.type": "block",
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.vdo": "0",
Nov 24 09:50:35 compute-0 epic_raman[261750]:                 "ceph.with_tpm": "0"
Nov 24 09:50:35 compute-0 epic_raman[261750]:             },
Nov 24 09:50:35 compute-0 epic_raman[261750]:             "type": "block",
Nov 24 09:50:35 compute-0 epic_raman[261750]:             "vg_name": "ceph_vg0"
Nov 24 09:50:35 compute-0 epic_raman[261750]:         }
Nov 24 09:50:35 compute-0 epic_raman[261750]:     ]
Nov 24 09:50:35 compute-0 epic_raman[261750]: }
Nov 24 09:50:35 compute-0 systemd[1]: libpod-b2f093a936ee40fd6af650cdad93e4efd8ec64451f2786ff72516336fbd797aa.scope: Deactivated successfully.
Nov 24 09:50:35 compute-0 podman[261733]: 2025-11-24 09:50:35.823945751 +0000 UTC m=+0.401535391 container died b2f093a936ee40fd6af650cdad93e4efd8ec64451f2786ff72516336fbd797aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:50:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-87ca7ce079a12a214b52c04da9ef6df8d622f8743bafaa0f371fe65eafd9cbc3-merged.mount: Deactivated successfully.
Nov 24 09:50:35 compute-0 podman[261733]: 2025-11-24 09:50:35.88187667 +0000 UTC m=+0.459466320 container remove b2f093a936ee40fd6af650cdad93e4efd8ec64451f2786ff72516336fbd797aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_raman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:50:35 compute-0 systemd[1]: libpod-conmon-b2f093a936ee40fd6af650cdad93e4efd8ec64451f2786ff72516336fbd797aa.scope: Deactivated successfully.
Nov 24 09:50:35 compute-0 sudo[261628]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:35 compute-0 sudo[261773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:50:35 compute-0 sudo[261773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:35 compute-0 sudo[261773]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:36 compute-0 sudo[261798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:50:36 compute-0 sudo[261798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:36 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:36 compute-0 podman[261863]: 2025-11-24 09:50:36.395520745 +0000 UTC m=+0.035131908 container create 7211b8de5d624a7300d9c12aa69852fe5f33c66a7d4c3c1fa2cca838f7d2206d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_wing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:50:36 compute-0 ceph-mon[74331]: pgmap v701: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 186 B/s rd, 0 op/s
Nov 24 09:50:36 compute-0 systemd[1]: Started libpod-conmon-7211b8de5d624a7300d9c12aa69852fe5f33c66a7d4c3c1fa2cca838f7d2206d.scope.
Nov 24 09:50:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:50:36 compute-0 podman[261863]: 2025-11-24 09:50:36.470954614 +0000 UTC m=+0.110565797 container init 7211b8de5d624a7300d9c12aa69852fe5f33c66a7d4c3c1fa2cca838f7d2206d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:50:36 compute-0 podman[261863]: 2025-11-24 09:50:36.476305316 +0000 UTC m=+0.115916479 container start 7211b8de5d624a7300d9c12aa69852fe5f33c66a7d4c3c1fa2cca838f7d2206d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:50:36 compute-0 podman[261863]: 2025-11-24 09:50:36.381727855 +0000 UTC m=+0.021339038 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:50:36 compute-0 podman[261863]: 2025-11-24 09:50:36.479218428 +0000 UTC m=+0.118829591 container attach 7211b8de5d624a7300d9c12aa69852fe5f33c66a7d4c3c1fa2cca838f7d2206d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_wing, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:50:36 compute-0 eager_wing[261879]: 167 167
Nov 24 09:50:36 compute-0 systemd[1]: libpod-7211b8de5d624a7300d9c12aa69852fe5f33c66a7d4c3c1fa2cca838f7d2206d.scope: Deactivated successfully.
Nov 24 09:50:36 compute-0 podman[261863]: 2025-11-24 09:50:36.48051182 +0000 UTC m=+0.120122983 container died 7211b8de5d624a7300d9c12aa69852fe5f33c66a7d4c3c1fa2cca838f7d2206d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_wing, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:50:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:36.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-86be0d046a2b8cc8a849e8f9f6774c0618ce468aebecdcadd50dd441527687e3-merged.mount: Deactivated successfully.
Nov 24 09:50:36 compute-0 podman[261863]: 2025-11-24 09:50:36.515911703 +0000 UTC m=+0.155522866 container remove 7211b8de5d624a7300d9c12aa69852fe5f33c66a7d4c3c1fa2cca838f7d2206d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:50:36 compute-0 systemd[1]: libpod-conmon-7211b8de5d624a7300d9c12aa69852fe5f33c66a7d4c3c1fa2cca838f7d2206d.scope: Deactivated successfully.
Nov 24 09:50:36 compute-0 podman[261902]: 2025-11-24 09:50:36.710845049 +0000 UTC m=+0.044123779 container create fc14daa5923875e1664233734ea73976bfaafc0715cb698d59d31f6d362aa8f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wiles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 09:50:36 compute-0 systemd[1]: Started libpod-conmon-fc14daa5923875e1664233734ea73976bfaafc0715cb698d59d31f6d362aa8f4.scope.
Nov 24 09:50:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:50:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a230c106677d01c2d1943dfc71b64369100fa5d1494c31dc07dfbe66dd8b76b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a230c106677d01c2d1943dfc71b64369100fa5d1494c31dc07dfbe66dd8b76b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a230c106677d01c2d1943dfc71b64369100fa5d1494c31dc07dfbe66dd8b76b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a230c106677d01c2d1943dfc71b64369100fa5d1494c31dc07dfbe66dd8b76b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:50:36 compute-0 podman[261902]: 2025-11-24 09:50:36.690463947 +0000 UTC m=+0.023742717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:50:36 compute-0 podman[261902]: 2025-11-24 09:50:36.789344965 +0000 UTC m=+0.122623715 container init fc14daa5923875e1664233734ea73976bfaafc0715cb698d59d31f6d362aa8f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:50:36 compute-0 podman[261902]: 2025-11-24 09:50:36.795478996 +0000 UTC m=+0.128757716 container start fc14daa5923875e1664233734ea73976bfaafc0715cb698d59d31f6d362aa8f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:50:36 compute-0 podman[261902]: 2025-11-24 09:50:36.798383947 +0000 UTC m=+0.131662687 container attach fc14daa5923875e1664233734ea73976bfaafc0715cb698d59d31f6d362aa8f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 09:50:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:36 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c003ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:50:37.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:50:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:50:37.083Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:50:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:50:37.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:50:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v702: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 93 B/s wr, 0 op/s
Nov 24 09:50:37 compute-0 lvm[261993]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:50:37 compute-0 lvm[261993]: VG ceph_vg0 finished
Nov 24 09:50:37 compute-0 goofy_wiles[261918]: {}
Nov 24 09:50:37 compute-0 systemd[1]: libpod-fc14daa5923875e1664233734ea73976bfaafc0715cb698d59d31f6d362aa8f4.scope: Deactivated successfully.
Nov 24 09:50:37 compute-0 podman[261902]: 2025-11-24 09:50:37.468954371 +0000 UTC m=+0.802233111 container died fc14daa5923875e1664233734ea73976bfaafc0715cb698d59d31f6d362aa8f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wiles, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:50:37 compute-0 systemd[1]: libpod-fc14daa5923875e1664233734ea73976bfaafc0715cb698d59d31f6d362aa8f4.scope: Consumed 1.017s CPU time.
Nov 24 09:50:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:37.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:37 compute-0 ceph-mon[74331]: pgmap v702: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 93 B/s wr, 0 op/s
Nov 24 09:50:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a230c106677d01c2d1943dfc71b64369100fa5d1494c31dc07dfbe66dd8b76b0-merged.mount: Deactivated successfully.
Nov 24 09:50:37 compute-0 podman[261902]: 2025-11-24 09:50:37.51189182 +0000 UTC m=+0.845170540 container remove fc14daa5923875e1664233734ea73976bfaafc0715cb698d59d31f6d362aa8f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 09:50:37 compute-0 systemd[1]: libpod-conmon-fc14daa5923875e1664233734ea73976bfaafc0715cb698d59d31f6d362aa8f4.scope: Deactivated successfully.
Nov 24 09:50:37 compute-0 sudo[261798]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:50:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:50:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:37 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:37 compute-0 sudo[262008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:50:37 compute-0 sudo[262008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:37 compute-0 sudo[262008]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:38 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:38.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:38 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:38 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:38 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003f80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:39 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:50:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v703: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 93 B/s wr, 0 op/s
Nov 24 09:50:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:39.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:39 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c003ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:39 compute-0 ceph-mon[74331]: pgmap v703: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 93 B/s wr, 0 op/s
Nov 24 09:50:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:40 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:40.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:40 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:50:40] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:50:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:50:40] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Nov 24 09:50:41 compute-0 sshd-session[262035]: Received disconnect from 83.229.122.23 port 57926:11: Bye Bye [preauth]
Nov 24 09:50:41 compute-0 sshd-session[262035]: Disconnected from authenticating user root 83.229.122.23 port 57926 [preauth]
Nov 24 09:50:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v704: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 93 B/s wr, 0 op/s
Nov 24 09:50:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:41.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:41 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:42 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:50:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:42 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:50:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:42 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:42 compute-0 ceph-mon[74331]: pgmap v704: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 651 B/s rd, 93 B/s wr, 0 op/s
Nov 24 09:50:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:42 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v705: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 651 B/s wr, 1 op/s
Nov 24 09:50:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:43.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:43 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:44 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:44 compute-0 ceph-mon[74331]: pgmap v705: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 651 B/s wr, 1 op/s
Nov 24 09:50:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:44.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:44 compute-0 sudo[262042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:50:44 compute-0 sudo[262042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:50:44 compute-0 sudo[262042]: pam_unix(sudo:session): session closed for user root
Nov 24 09:50:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:44 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v706: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:50:45
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.control']
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:50:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:45 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:50:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 24 09:50:45 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:50:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:45.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:45 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:45 compute-0 ceph-mon[74331]: pgmap v706: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Nov 24 09:50:45 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:50:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:50:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:50:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:46 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:46.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:46 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64003fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:50:47.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:50:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v707: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:50:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:47.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:47 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:48 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:48 compute-0 ceph-mon[74331]: pgmap v707: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Nov 24 09:50:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:48.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:48 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v708: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:50:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:49.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:49 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:50 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:50 compute-0 ceph-mon[74331]: pgmap v708: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:50:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:50.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:50 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50002930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:50:50] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 24 09:50:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:50:50] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 24 09:50:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v709: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:50:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:51.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:51 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40002810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/095052 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 24 09:50:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:52 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64004090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:52 compute-0 ceph-mon[74331]: pgmap v709: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Nov 24 09:50:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:52.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:52 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v710: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:53.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:53 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50002930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:54 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40002810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:54 compute-0 ceph-mon[74331]: pgmap v710: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Nov 24 09:50:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:50:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:54.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:50:54 compute-0 podman[262079]: 2025-11-24 09:50:54.80504338 +0000 UTC m=+0.083748237 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:50:54 compute-0 podman[262101]: 2025-11-24 09:50:54.957516079 +0000 UTC m=+0.100936280 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 24 09:50:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:54 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64004090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v711: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:50:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:55.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:55 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:56 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50002240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:56 compute-0 ceph-mon[74331]: pgmap v711: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:50:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:56.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:56 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40002810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:50:57.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:50:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:50:57.085Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:50:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:50:57.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:50:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v712: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:50:57 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:50:57.358 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:50:57 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:50:57.359 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:50:57 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:50:57.359 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:50:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:57.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:57 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64004090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:58 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:58 compute-0 ceph-mon[74331]: pgmap v712: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Nov 24 09:50:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:50:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:50:58.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:58 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50002240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v713: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:50:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:50:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:50:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:50:59.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:50:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:50:59 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40002810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:50:59 compute-0 ceph-mon[74331]: pgmap v713: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:51:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:00 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64004090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:00.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:51:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:00 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:51:00] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 24 09:51:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:51:00] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 24 09:51:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v714: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:51:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:01.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1960026280' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:51:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1960026280' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:51:01 compute-0 ceph-mon[74331]: pgmap v714: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:51:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:02 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40002810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:02.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:02 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e640040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v715: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:51:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:03.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:03 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003660 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:03 compute-0 podman[262136]: 2025-11-24 09:51:03.794052476 +0000 UTC m=+0.063613469 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 09:51:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:04 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:04 compute-0 ceph-mon[74331]: pgmap v715: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 24 09:51:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:04.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:04 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:04 compute-0 sudo[262156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:51:04 compute-0 sudo[262156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:04 compute-0 sudo[262156]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v716: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:05.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:05 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e640040d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:06 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003660 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:06 compute-0 ceph-mon[74331]: pgmap v716: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:06.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:06 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50003660 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:51:07.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:51:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v717: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:51:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:07.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:51:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:07 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:08 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e640040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:08 compute-0 ceph-mon[74331]: pgmap v717: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:08.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:08 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e640040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v718: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:09.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:09 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40002810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:09 compute-0 nova_compute[257700]: 2025-11-24 09:51:09.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:10 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:10 compute-0 ceph-mon[74331]: pgmap v718: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:10.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:10 compute-0 nova_compute[257700]: 2025-11-24 09:51:10.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:10 compute-0 nova_compute[257700]: 2025-11-24 09:51:10.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:10 compute-0 nova_compute[257700]: 2025-11-24 09:51:10.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:51:10 compute-0 nova_compute[257700]: 2025-11-24 09:51:10.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:10 compute-0 nova_compute[257700]: 2025-11-24 09:51:10.939 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:51:10 compute-0 nova_compute[257700]: 2025-11-24 09:51:10.940 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:51:10 compute-0 nova_compute[257700]: 2025-11-24 09:51:10.940 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:51:10 compute-0 nova_compute[257700]: 2025-11-24 09:51:10.940 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:51:10 compute-0 nova_compute[257700]: 2025-11-24 09:51:10.940 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:51:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:10 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:51:10] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 24 09:51:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:51:10] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 24 09:51:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v719: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:51:11 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/233033597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:11 compute-0 nova_compute[257700]: 2025-11-24 09:51:11.363 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:51:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/233033597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:11 compute-0 nova_compute[257700]: 2025-11-24 09:51:11.504 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:51:11 compute-0 nova_compute[257700]: 2025-11-24 09:51:11.505 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4914MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:51:11 compute-0 nova_compute[257700]: 2025-11-24 09:51:11.505 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:51:11 compute-0 nova_compute[257700]: 2025-11-24 09:51:11.506 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:51:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:11.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:11 compute-0 nova_compute[257700]: 2025-11-24 09:51:11.573 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:51:11 compute-0 nova_compute[257700]: 2025-11-24 09:51:11.573 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:51:11 compute-0 nova_compute[257700]: 2025-11-24 09:51:11.587 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:51:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:11 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:12 compute-0 nova_compute[257700]: 2025-11-24 09:51:12.016 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:51:12 compute-0 nova_compute[257700]: 2025-11-24 09:51:12.021 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:51:12 compute-0 nova_compute[257700]: 2025-11-24 09:51:12.045 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:51:12 compute-0 nova_compute[257700]: 2025-11-24 09:51:12.047 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:51:12 compute-0 nova_compute[257700]: 2025-11-24 09:51:12.047 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:51:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:12 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:12 compute-0 ceph-mon[74331]: pgmap v719: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2281961602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/423930642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.002000048s ======
Nov 24 09:51:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:12.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Nov 24 09:51:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:12 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:13 compute-0 nova_compute[257700]: 2025-11-24 09:51:13.047 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:13 compute-0 nova_compute[257700]: 2025-11-24 09:51:13.048 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:51:13 compute-0 nova_compute[257700]: 2025-11-24 09:51:13.048 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:51:13 compute-0 nova_compute[257700]: 2025-11-24 09:51:13.058 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:51:13 compute-0 nova_compute[257700]: 2025-11-24 09:51:13.059 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:13 compute-0 nova_compute[257700]: 2025-11-24 09:51:13.059 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v720: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:51:13 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4230280917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:13 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2994754606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:13 compute-0 ceph-mon[74331]: pgmap v720: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:51:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:13.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:13 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:13 compute-0 nova_compute[257700]: 2025-11-24 09:51:13.928 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:13 compute-0 nova_compute[257700]: 2025-11-24 09:51:13.929 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:14 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:14 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1692086803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:51:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:14.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:14 compute-0 nova_compute[257700]: 2025-11-24 09:51:14.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:51:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:14 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v721: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:51:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:51:15 compute-0 ceph-mon[74331]: pgmap v721: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:51:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:51:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:51:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:51:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:51:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:15.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:15 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:16 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:16.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:16 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:51:17.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:51:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v722: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:17.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:17 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:18 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:18 compute-0 ceph-mon[74331]: pgmap v722: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:18.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:18 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v723: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:19.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:19 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:20 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:20 compute-0 ceph-mon[74331]: pgmap v723: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:20.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:51:20.559 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:51:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:51:20.560 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:51:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:51:20.560 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:51:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:51:20] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 24 09:51:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:51:20] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Nov 24 09:51:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:20 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v724: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:21.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:21 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:22 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:22 compute-0 ceph-mon[74331]: pgmap v724: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:22.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:22 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v725: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:51:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:23.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:23 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:24 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:24 compute-0 ceph-mon[74331]: pgmap v725: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:51:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:24.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:24 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:25 compute-0 sudo[262247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:51:25 compute-0 sudo[262247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:25 compute-0 sudo[262247]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:25 compute-0 podman[262271]: 2025-11-24 09:51:25.160937139 +0000 UTC m=+0.060756808 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 24 09:51:25 compute-0 podman[262272]: 2025-11-24 09:51:25.175906629 +0000 UTC m=+0.075683077 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:51:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v726: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:25.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:25 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:26 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:26 compute-0 ceph-mon[74331]: pgmap v726: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:26.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:26 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:51:27.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:51:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:51:27.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:51:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:51:27.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:51:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v727: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:51:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:27.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:51:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:27 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:28 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:28 compute-0 ceph-mon[74331]: pgmap v727: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:28.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:29 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v728: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:29 compute-0 ceph-mon[74331]: pgmap v728: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:29.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:29 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:30 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:51:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:30.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:51:30] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 24 09:51:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:51:30] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 24 09:51:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:31 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v729: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:31 compute-0 ceph-mon[74331]: pgmap v729: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:31 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:32 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:32.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:33 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v730: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:51:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:33 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:34 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:34 compute-0 ceph-mon[74331]: pgmap v730: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:51:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:34.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:34 compute-0 podman[262329]: 2025-11-24 09:51:34.769260077 +0000 UTC m=+0.049895982 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:51:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:35 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v731: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:35.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:35 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:36 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:36 compute-0 ceph-mon[74331]: pgmap v731: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:36.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:37 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:51:37.090Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:51:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:51:37.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:51:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v732: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:37.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:37 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:37 compute-0 sudo[262351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:51:37 compute-0 sudo[262351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:37 compute-0 sudo[262351]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:37 compute-0 sudo[262376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:51:37 compute-0 sudo[262376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:38 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:38 compute-0 ceph-mon[74331]: pgmap v732: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:38 compute-0 sudo[262376]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:38.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v733: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Nov 24 09:51:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:51:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:51:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:51:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:51:38 compute-0 sudo[262432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:51:38 compute-0 sudo[262432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:38 compute-0 sudo[262432]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:38 compute-0 sudo[262457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:51:38 compute-0 sudo[262457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:38 compute-0 ceph-mgr[74626]: [dashboard INFO request] [192.168.122.100:46632] [POST] [200] [0.003s] [4.0B] [07a3276d-dd59-436f-9194-e3b67f407f1e] /api/prometheus_receiver
Nov 24 09:51:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:39 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:39 compute-0 podman[262524]: 2025-11-24 09:51:39.231040869 +0000 UTC m=+0.051024300 container create 9f4780e959f5a506241da12393b42913346b1d41ca8c26f6380596e04cf79241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:51:39 compute-0 systemd[1]: Started libpod-conmon-9f4780e959f5a506241da12393b42913346b1d41ca8c26f6380596e04cf79241.scope.
Nov 24 09:51:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:51:39 compute-0 podman[262524]: 2025-11-24 09:51:39.304895739 +0000 UTC m=+0.124879220 container init 9f4780e959f5a506241da12393b42913346b1d41ca8c26f6380596e04cf79241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_raman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 09:51:39 compute-0 podman[262524]: 2025-11-24 09:51:39.214324866 +0000 UTC m=+0.034308317 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:51:39 compute-0 podman[262524]: 2025-11-24 09:51:39.312576179 +0000 UTC m=+0.132559610 container start 9f4780e959f5a506241da12393b42913346b1d41ca8c26f6380596e04cf79241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:51:39 compute-0 podman[262524]: 2025-11-24 09:51:39.315539572 +0000 UTC m=+0.135523043 container attach 9f4780e959f5a506241da12393b42913346b1d41ca8c26f6380596e04cf79241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:51:39 compute-0 recursing_raman[262541]: 167 167
Nov 24 09:51:39 compute-0 systemd[1]: libpod-9f4780e959f5a506241da12393b42913346b1d41ca8c26f6380596e04cf79241.scope: Deactivated successfully.
Nov 24 09:51:39 compute-0 podman[262524]: 2025-11-24 09:51:39.318534065 +0000 UTC m=+0.138517496 container died 9f4780e959f5a506241da12393b42913346b1d41ca8c26f6380596e04cf79241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_raman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:51:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-efa07461a4cc8248802b9486cdec450b6fa30130c07e84bdc42e6581cc6ce856-merged.mount: Deactivated successfully.
Nov 24 09:51:39 compute-0 podman[262524]: 2025-11-24 09:51:39.357873825 +0000 UTC m=+0.177857256 container remove 9f4780e959f5a506241da12393b42913346b1d41ca8c26f6380596e04cf79241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:51:39 compute-0 systemd[1]: libpod-conmon-9f4780e959f5a506241da12393b42913346b1d41ca8c26f6380596e04cf79241.scope: Deactivated successfully.
Nov 24 09:51:39 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:51:39 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:51:39 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:51:39 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:51:39 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:51:39 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:51:39 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:51:39 compute-0 podman[262564]: 2025-11-24 09:51:39.503213069 +0000 UTC m=+0.036381848 container create 98d29e608af80ffe1cb2910e1b052201dfc6dfe839df17179c48a69dec8f11d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hellman, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:51:39 compute-0 systemd[1]: Started libpod-conmon-98d29e608af80ffe1cb2910e1b052201dfc6dfe839df17179c48a69dec8f11d1.scope.
Nov 24 09:51:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:51:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e736ff4965f8435b1ee93b33ea500f8d3a2472fef2971b6bd9907d3a93705d8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e736ff4965f8435b1ee93b33ea500f8d3a2472fef2971b6bd9907d3a93705d8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e736ff4965f8435b1ee93b33ea500f8d3a2472fef2971b6bd9907d3a93705d8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e736ff4965f8435b1ee93b33ea500f8d3a2472fef2971b6bd9907d3a93705d8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e736ff4965f8435b1ee93b33ea500f8d3a2472fef2971b6bd9907d3a93705d8d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:39 compute-0 podman[262564]: 2025-11-24 09:51:39.561991738 +0000 UTC m=+0.095160597 container init 98d29e608af80ffe1cb2910e1b052201dfc6dfe839df17179c48a69dec8f11d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 09:51:39 compute-0 podman[262564]: 2025-11-24 09:51:39.570742804 +0000 UTC m=+0.103911573 container start 98d29e608af80ffe1cb2910e1b052201dfc6dfe839df17179c48a69dec8f11d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hellman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:51:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:39.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:39 compute-0 podman[262564]: 2025-11-24 09:51:39.574701182 +0000 UTC m=+0.107869951 container attach 98d29e608af80ffe1cb2910e1b052201dfc6dfe839df17179c48a69dec8f11d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hellman, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:51:39 compute-0 podman[262564]: 2025-11-24 09:51:39.488985458 +0000 UTC m=+0.022154237 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:51:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:39 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:39 compute-0 frosty_hellman[262581]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:51:39 compute-0 frosty_hellman[262581]: --> All data devices are unavailable
Nov 24 09:51:39 compute-0 systemd[1]: libpod-98d29e608af80ffe1cb2910e1b052201dfc6dfe839df17179c48a69dec8f11d1.scope: Deactivated successfully.
Nov 24 09:51:39 compute-0 podman[262564]: 2025-11-24 09:51:39.882493541 +0000 UTC m=+0.415662310 container died 98d29e608af80ffe1cb2910e1b052201dfc6dfe839df17179c48a69dec8f11d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hellman, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:51:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e736ff4965f8435b1ee93b33ea500f8d3a2472fef2971b6bd9907d3a93705d8d-merged.mount: Deactivated successfully.
Nov 24 09:51:39 compute-0 podman[262564]: 2025-11-24 09:51:39.918212051 +0000 UTC m=+0.451380830 container remove 98d29e608af80ffe1cb2910e1b052201dfc6dfe839df17179c48a69dec8f11d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hellman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:51:39 compute-0 systemd[1]: libpod-conmon-98d29e608af80ffe1cb2910e1b052201dfc6dfe839df17179c48a69dec8f11d1.scope: Deactivated successfully.
Nov 24 09:51:39 compute-0 sudo[262457]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:40 compute-0 sudo[262610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:51:40 compute-0 sudo[262610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:40 compute-0 sudo[262610]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:40 compute-0 sudo[262635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:51:40 compute-0 sudo[262635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:40 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:40 compute-0 ceph-mon[74331]: pgmap v733: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Nov 24 09:51:40 compute-0 podman[262700]: 2025-11-24 09:51:40.527736741 +0000 UTC m=+0.038622154 container create 373b328f7b42d8f4c172620e8efd133d418608cc9df167202bfcb649e418234e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_wescoff, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:51:40 compute-0 systemd[1]: Started libpod-conmon-373b328f7b42d8f4c172620e8efd133d418608cc9df167202bfcb649e418234e.scope.
Nov 24 09:51:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:40.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:51:40 compute-0 podman[262700]: 2025-11-24 09:51:40.512478534 +0000 UTC m=+0.023363967 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:51:40 compute-0 podman[262700]: 2025-11-24 09:51:40.611104536 +0000 UTC m=+0.121989969 container init 373b328f7b42d8f4c172620e8efd133d418608cc9df167202bfcb649e418234e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_wescoff, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:51:40 compute-0 podman[262700]: 2025-11-24 09:51:40.618137179 +0000 UTC m=+0.129022582 container start 373b328f7b42d8f4c172620e8efd133d418608cc9df167202bfcb649e418234e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 09:51:40 compute-0 podman[262700]: 2025-11-24 09:51:40.621560474 +0000 UTC m=+0.132445887 container attach 373b328f7b42d8f4c172620e8efd133d418608cc9df167202bfcb649e418234e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:51:40 compute-0 agitated_wescoff[262717]: 167 167
Nov 24 09:51:40 compute-0 systemd[1]: libpod-373b328f7b42d8f4c172620e8efd133d418608cc9df167202bfcb649e418234e.scope: Deactivated successfully.
Nov 24 09:51:40 compute-0 podman[262700]: 2025-11-24 09:51:40.623911731 +0000 UTC m=+0.134797144 container died 373b328f7b42d8f4c172620e8efd133d418608cc9df167202bfcb649e418234e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 09:51:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e90a8d9cbebbc2d90178e5d82a556713ab3e711f17906f9c7ef5ee2f2df3361b-merged.mount: Deactivated successfully.
Nov 24 09:51:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v734: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Nov 24 09:51:40 compute-0 podman[262700]: 2025-11-24 09:51:40.658613017 +0000 UTC m=+0.169498440 container remove 373b328f7b42d8f4c172620e8efd133d418608cc9df167202bfcb649e418234e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:51:40 compute-0 systemd[1]: libpod-conmon-373b328f7b42d8f4c172620e8efd133d418608cc9df167202bfcb649e418234e.scope: Deactivated successfully.
Nov 24 09:51:40 compute-0 podman[262741]: 2025-11-24 09:51:40.837019176 +0000 UTC m=+0.039015183 container create ee4665df07f7bc84ed2ea24865bf296ce33cd5d21f0444e17053996fa44b10fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_tu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:51:40 compute-0 systemd[1]: Started libpod-conmon-ee4665df07f7bc84ed2ea24865bf296ce33cd5d21f0444e17053996fa44b10fb.scope.
Nov 24 09:51:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79143a462bbd83517cea5935c80e18557e614916929087ec22df7d7c078f15a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79143a462bbd83517cea5935c80e18557e614916929087ec22df7d7c078f15a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79143a462bbd83517cea5935c80e18557e614916929087ec22df7d7c078f15a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79143a462bbd83517cea5935c80e18557e614916929087ec22df7d7c078f15a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:40 compute-0 podman[262741]: 2025-11-24 09:51:40.916878126 +0000 UTC m=+0.118874133 container init ee4665df07f7bc84ed2ea24865bf296ce33cd5d21f0444e17053996fa44b10fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_tu, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:51:40 compute-0 podman[262741]: 2025-11-24 09:51:40.821599366 +0000 UTC m=+0.023595393 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:51:40 compute-0 podman[262741]: 2025-11-24 09:51:40.923440937 +0000 UTC m=+0.125436954 container start ee4665df07f7bc84ed2ea24865bf296ce33cd5d21f0444e17053996fa44b10fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_tu, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:51:40 compute-0 podman[262741]: 2025-11-24 09:51:40.926463251 +0000 UTC m=+0.128459278 container attach ee4665df07f7bc84ed2ea24865bf296ce33cd5d21f0444e17053996fa44b10fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:51:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:51:40] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 24 09:51:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:51:40] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 24 09:51:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:41 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:41 compute-0 quizzical_tu[262759]: {
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:     "0": [
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:         {
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             "devices": [
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "/dev/loop3"
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             ],
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             "lv_name": "ceph_lv0",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             "lv_size": "21470642176",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             "name": "ceph_lv0",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             "tags": {
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.cluster_name": "ceph",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.crush_device_class": "",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.encrypted": "0",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.osd_id": "0",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.type": "block",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.vdo": "0",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:                 "ceph.with_tpm": "0"
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             },
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             "type": "block",
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:             "vg_name": "ceph_vg0"
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:         }
Nov 24 09:51:41 compute-0 quizzical_tu[262759]:     ]
Nov 24 09:51:41 compute-0 quizzical_tu[262759]: }
Nov 24 09:51:41 compute-0 systemd[1]: libpod-ee4665df07f7bc84ed2ea24865bf296ce33cd5d21f0444e17053996fa44b10fb.scope: Deactivated successfully.
Nov 24 09:51:41 compute-0 conmon[262759]: conmon ee4665df07f7bc84ed2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ee4665df07f7bc84ed2ea24865bf296ce33cd5d21f0444e17053996fa44b10fb.scope/container/memory.events
Nov 24 09:51:41 compute-0 podman[262741]: 2025-11-24 09:51:41.181620473 +0000 UTC m=+0.383616490 container died ee4665df07f7bc84ed2ea24865bf296ce33cd5d21f0444e17053996fa44b10fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-79143a462bbd83517cea5935c80e18557e614916929087ec22df7d7c078f15a0-merged.mount: Deactivated successfully.
Nov 24 09:51:41 compute-0 podman[262741]: 2025-11-24 09:51:41.229678778 +0000 UTC m=+0.431674805 container remove ee4665df07f7bc84ed2ea24865bf296ce33cd5d21f0444e17053996fa44b10fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_tu, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:51:41 compute-0 systemd[1]: libpod-conmon-ee4665df07f7bc84ed2ea24865bf296ce33cd5d21f0444e17053996fa44b10fb.scope: Deactivated successfully.
Nov 24 09:51:41 compute-0 sudo[262635]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:41 compute-0 sudo[262781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:51:41 compute-0 sudo[262781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:41 compute-0 sudo[262781]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:41 compute-0 sudo[262806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:51:41 compute-0 sudo[262806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:41 compute-0 ceph-mon[74331]: pgmap v734: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Nov 24 09:51:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:41.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:41 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:41 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:41 compute-0 podman[262871]: 2025-11-24 09:51:41.76350596 +0000 UTC m=+0.034891622 container create 6fdd43c9a721e745d71516dec4ff60863d844270db9dc7d8ee7c43083e1d8f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:51:41 compute-0 systemd[1]: Started libpod-conmon-6fdd43c9a721e745d71516dec4ff60863d844270db9dc7d8ee7c43083e1d8f8a.scope.
Nov 24 09:51:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:51:41 compute-0 podman[262871]: 2025-11-24 09:51:41.821741316 +0000 UTC m=+0.093126998 container init 6fdd43c9a721e745d71516dec4ff60863d844270db9dc7d8ee7c43083e1d8f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Nov 24 09:51:41 compute-0 podman[262871]: 2025-11-24 09:51:41.827148369 +0000 UTC m=+0.098534031 container start 6fdd43c9a721e745d71516dec4ff60863d844270db9dc7d8ee7c43083e1d8f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:51:41 compute-0 podman[262871]: 2025-11-24 09:51:41.829892377 +0000 UTC m=+0.101278069 container attach 6fdd43c9a721e745d71516dec4ff60863d844270db9dc7d8ee7c43083e1d8f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 09:51:41 compute-0 great_mclean[262888]: 167 167
Nov 24 09:51:41 compute-0 systemd[1]: libpod-6fdd43c9a721e745d71516dec4ff60863d844270db9dc7d8ee7c43083e1d8f8a.scope: Deactivated successfully.
Nov 24 09:51:41 compute-0 conmon[262888]: conmon 6fdd43c9a721e745d715 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6fdd43c9a721e745d71516dec4ff60863d844270db9dc7d8ee7c43083e1d8f8a.scope/container/memory.events
Nov 24 09:51:41 compute-0 podman[262871]: 2025-11-24 09:51:41.831802514 +0000 UTC m=+0.103188176 container died 6fdd43c9a721e745d71516dec4ff60863d844270db9dc7d8ee7c43083e1d8f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:51:41 compute-0 podman[262871]: 2025-11-24 09:51:41.749214388 +0000 UTC m=+0.020600070 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-de666c55d00a335e54cb506b5f463a6bab9d7d230373b0d68a6abd1cf8a8665d-merged.mount: Deactivated successfully.
Nov 24 09:51:41 compute-0 podman[262871]: 2025-11-24 09:51:41.867385551 +0000 UTC m=+0.138771203 container remove 6fdd43c9a721e745d71516dec4ff60863d844270db9dc7d8ee7c43083e1d8f8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 09:51:41 compute-0 systemd[1]: libpod-conmon-6fdd43c9a721e745d71516dec4ff60863d844270db9dc7d8ee7c43083e1d8f8a.scope: Deactivated successfully.
Nov 24 09:51:42 compute-0 podman[262910]: 2025-11-24 09:51:42.004989244 +0000 UTC m=+0.035313701 container create 3e71ade5c3bfaed70e7184382660f53da9ca8ed16b3a1ea94159457e6b648360 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 24 09:51:42 compute-0 systemd[1]: Started libpod-conmon-3e71ade5c3bfaed70e7184382660f53da9ca8ed16b3a1ea94159457e6b648360.scope.
Nov 24 09:51:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79edec3f06e337d5e468180980beb2bf8ab5924743eea34165a866838213ce71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79edec3f06e337d5e468180980beb2bf8ab5924743eea34165a866838213ce71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79edec3f06e337d5e468180980beb2bf8ab5924743eea34165a866838213ce71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79edec3f06e337d5e468180980beb2bf8ab5924743eea34165a866838213ce71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:51:42 compute-0 podman[262910]: 2025-11-24 09:51:41.989919103 +0000 UTC m=+0.020243590 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:51:42 compute-0 podman[262910]: 2025-11-24 09:51:42.086061333 +0000 UTC m=+0.116385820 container init 3e71ade5c3bfaed70e7184382660f53da9ca8ed16b3a1ea94159457e6b648360 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:51:42 compute-0 podman[262910]: 2025-11-24 09:51:42.09486227 +0000 UTC m=+0.125186727 container start 3e71ade5c3bfaed70e7184382660f53da9ca8ed16b3a1ea94159457e6b648360 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:51:42 compute-0 podman[262910]: 2025-11-24 09:51:42.098689725 +0000 UTC m=+0.129014192 container attach 3e71ade5c3bfaed70e7184382660f53da9ca8ed16b3a1ea94159457e6b648360 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:51:42 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:42 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:42.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v735: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 359 B/s rd, 0 op/s
Nov 24 09:51:42 compute-0 lvm[263002]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:51:42 compute-0 lvm[263002]: VG ceph_vg0 finished
Nov 24 09:51:42 compute-0 cool_diffie[262927]: {}
Nov 24 09:51:42 compute-0 systemd[1]: libpod-3e71ade5c3bfaed70e7184382660f53da9ca8ed16b3a1ea94159457e6b648360.scope: Deactivated successfully.
Nov 24 09:51:42 compute-0 systemd[1]: libpod-3e71ade5c3bfaed70e7184382660f53da9ca8ed16b3a1ea94159457e6b648360.scope: Consumed 1.031s CPU time.
Nov 24 09:51:42 compute-0 podman[262910]: 2025-11-24 09:51:42.74577979 +0000 UTC m=+0.776104247 container died 3e71ade5c3bfaed70e7184382660f53da9ca8ed16b3a1ea94159457e6b648360 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 09:51:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-79edec3f06e337d5e468180980beb2bf8ab5924743eea34165a866838213ce71-merged.mount: Deactivated successfully.
Nov 24 09:51:42 compute-0 podman[262910]: 2025-11-24 09:51:42.787245671 +0000 UTC m=+0.817570128 container remove 3e71ade5c3bfaed70e7184382660f53da9ca8ed16b3a1ea94159457e6b648360 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_diffie, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 09:51:42 compute-0 systemd[1]: libpod-conmon-3e71ade5c3bfaed70e7184382660f53da9ca8ed16b3a1ea94159457e6b648360.scope: Deactivated successfully.
Nov 24 09:51:42 compute-0 sudo[262806]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:51:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:51:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:51:42 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:51:42 compute-0 sudo[263019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:51:42 compute-0 sudo[263019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:42 compute-0 sudo[263019]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:43 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.528778) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903528812, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1221, "num_deletes": 252, "total_data_size": 2136122, "memory_usage": 2190312, "flush_reason": "Manual Compaction"}
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903537395, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1342962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22148, "largest_seqno": 23368, "table_properties": {"data_size": 1338313, "index_size": 2045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12168, "raw_average_key_size": 20, "raw_value_size": 1328214, "raw_average_value_size": 2251, "num_data_blocks": 88, "num_entries": 590, "num_filter_entries": 590, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977801, "oldest_key_time": 1763977801, "file_creation_time": 1763977903, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 8679 microseconds, and 4490 cpu microseconds.
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.537450) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1342962 bytes OK
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.537474) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.541383) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.541402) EVENT_LOG_v1 {"time_micros": 1763977903541396, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.541423) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2130671, prev total WAL file size 2130671, number of live WAL files 2.
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.542169) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353034' seq:72057594037927935, type:22 .. '6D67727374617400373537' seq:0, type:0; will stop at (end)
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1311KB)], [47(14MB)]
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903542220, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16283504, "oldest_snapshot_seqno": -1}
Nov 24 09:51:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:43.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5562 keys, 12889369 bytes, temperature: kUnknown
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903614716, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12889369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12853051, "index_size": 21296, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 140199, "raw_average_key_size": 25, "raw_value_size": 12753561, "raw_average_value_size": 2292, "num_data_blocks": 870, "num_entries": 5562, "num_filter_entries": 5562, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763977903, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.615045) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12889369 bytes
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.616054) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 224.4 rd, 177.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 14.2 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(21.7) write-amplify(9.6) OK, records in: 6039, records dropped: 477 output_compression: NoCompression
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.616070) EVENT_LOG_v1 {"time_micros": 1763977903616062, "job": 24, "event": "compaction_finished", "compaction_time_micros": 72567, "compaction_time_cpu_micros": 25842, "output_level": 6, "num_output_files": 1, "total_output_size": 12889369, "num_input_records": 6039, "num_output_records": 5562, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903616369, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977903618659, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.542068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.618713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.618717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.618719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.618721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:51:43 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:51:43.618722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:51:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:43 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:43 compute-0 ceph-mon[74331]: pgmap v735: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 359 B/s rd, 0 op/s
Nov 24 09:51:43 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:51:43 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:51:44 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:44 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:44.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v736: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Nov 24 09:51:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:45 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:45 compute-0 sudo[263048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:51:45 compute-0 sudo[263048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:51:45 compute-0 sudo[263048]: pam_unix(sudo:session): session closed for user root
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:51:45
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', '.nfs', 'default.rgw.log', 'backups', '.mgr', 'images', 'volumes', 'cephfs.cephfs.data']
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:51:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:45.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:45 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:51:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:51:45 compute-0 ceph-mon[74331]: pgmap v736: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Nov 24 09:51:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:51:46 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:46 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:46.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v737: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 449 B/s rd, 0 op/s
Nov 24 09:51:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:47 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:51:47.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:51:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:47.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:47 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:47 compute-0 ceph-mon[74331]: pgmap v737: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 449 B/s rd, 0 op/s
Nov 24 09:51:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:48 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e40001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:48.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v738: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Nov 24 09:51:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:51:48.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:51:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:49 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:49.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:49 compute-0 sshd-session[263078]: Connection closed by 159.65.46.209 port 50742
Nov 24 09:51:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:49 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:49 compute-0 ceph-mon[74331]: pgmap v738: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 269 B/s rd, 0 op/s
Nov 24 09:51:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:50 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:50.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v739: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:51:50] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 24 09:51:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:51:50] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Nov 24 09:51:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:51 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400020f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:51:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:51.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:51:51 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:51 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0046d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:52 compute-0 ceph-mon[74331]: pgmap v739: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:52 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:52 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:52.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v740: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:51:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:53 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e50004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:53.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:53 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400020f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:54 compute-0 ceph-mon[74331]: pgmap v740: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:51:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:54 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400020f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:54.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v741: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:55 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400020f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:55.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:55 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:55 compute-0 podman[263087]: 2025-11-24 09:51:55.793414369 +0000 UTC m=+0.059265693 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 24 09:51:55 compute-0 podman[263088]: 2025-11-24 09:51:55.87415732 +0000 UTC m=+0.140014234 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 24 09:51:56 compute-0 ceph-mon[74331]: pgmap v741: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:56 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:56 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:56.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v742: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:57 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400020f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:51:57.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
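[annotation] The alertmanager dispatcher above keeps timing out posting to the ceph-dashboard webhook receivers on compute-1 and compute-2, port 8443 ("context deadline exceeded", later "i/o timeout" against 192.168.122.101/.102). A minimal TCP reachability probe for those two endpoints, with hostnames and port copied from the error message:

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            with socket.create_connection((host, 8443), timeout=5):
                print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)

If the connect itself fails here, the problem is network/service-level on the peer nodes rather than anything in alertmanager's configuration.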
Nov 24 09:51:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:57.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:57 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c004710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:58 compute-0 ceph-mon[74331]: pgmap v742: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Nov 24 09:51:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:58 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:51:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:51:58.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v743: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:51:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:51:58.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:51:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:59 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:51:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:51:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:51:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:51:59.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:51:59 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:51:59 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400020f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:00 compute-0 ceph-mon[74331]: pgmap v743: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:52:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:00 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c004730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:00.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v744: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:52:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:52:00] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 24 09:52:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:52:00] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 24 09:52:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:52:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/856740024' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:52:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/856740024' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:52:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:01.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:01 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:02 compute-0 ceph-mon[74331]: pgmap v744: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:52:02 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:02 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400020f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:02.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v745: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:52:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:03 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c004750 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:03.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:03 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:04 compute-0 ceph-mon[74331]: pgmap v745: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:52:04 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:04 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:04.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v746: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:52:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:05 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400020f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/095205 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:52:05 compute-0 sudo[263143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:52:05 compute-0 sudo[263143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:05 compute-0 sudo[263143]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:05 compute-0 podman[263168]: 2025-11-24 09:52:05.320912184 +0000 UTC m=+0.047450981 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible)
Nov 24 09:52:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:05.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:05 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c004770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:06 compute-0 ceph-mon[74331]: pgmap v746: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 24 09:52:06 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:06 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:06.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v747: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:52:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:07 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:07.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:52:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:07.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:07 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400020f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/095208 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:52:08 compute-0 ceph-mon[74331]: pgmap v747: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 24 09:52:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:08 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c004790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:08.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v748: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:52:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:08.846Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:52:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:08.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:52:08 compute-0 nova_compute[257700]: 2025-11-24 09:52:08.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:08 compute-0 nova_compute[257700]: 2025-11-24 09:52:08.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 09:52:08 compute-0 nova_compute[257700]: 2025-11-24 09:52:08.934 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 09:52:08 compute-0 nova_compute[257700]: 2025-11-24 09:52:08.935 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:08 compute-0 nova_compute[257700]: 2025-11-24 09:52:08.936 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 09:52:08 compute-0 nova_compute[257700]: 2025-11-24 09:52:08.948 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
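[annotation] The nova_compute DEBUG lines above ("Running periodic task ComputeManager...") come from oslo.service's periodic-task machinery, which scans a manager class for decorated methods and runs them on a timer. A minimal sketch of that registration pattern, assuming oslo.service and oslo.config are installed (the class and task names are illustrative, not nova's):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        # run_immediately=True fires the task on the first
        # run_periodic_tasks() call instead of waiting `spacing` seconds.
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _demo_task(self, context):
            print("periodic task ran")

    mgr = DemoManager(cfg.CONF)
    # oslo.service calls this from its timer loop; each call logs the
    # "Running periodic task ..." lines seen above.
    mgr.run_periodic_tasks(context=None)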
Nov 24 09:52:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:09 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:09.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:09 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:10 compute-0 ceph-mon[74331]: pgmap v748: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:52:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:10 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400020f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:10.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v749: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:52:10 compute-0 nova_compute[257700]: 2025-11-24 09:52:10.956 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:10 compute-0 nova_compute[257700]: 2025-11-24 09:52:10.956 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:10 compute-0 nova_compute[257700]: 2025-11-24 09:52:10.957 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:10 compute-0 nova_compute[257700]: 2025-11-24 09:52:10.957 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:52:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:52:10] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 24 09:52:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:52:10] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Nov 24 09:52:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:11 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:11.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:11 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:11 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:11 compute-0 nova_compute[257700]: 2025-11-24 09:52:11.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:11 compute-0 nova_compute[257700]: 2025-11-24 09:52:11.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:52:11 compute-0 nova_compute[257700]: 2025-11-24 09:52:11.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:52:11 compute-0 nova_compute[257700]: 2025-11-24 09:52:11.936 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:52:11 compute-0 nova_compute[257700]: 2025-11-24 09:52:11.936 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:12 compute-0 ceph-mon[74331]: pgmap v749: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 24 09:52:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4225259137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:12 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:12 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:12.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v750: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:52:12 compute-0 nova_compute[257700]: 2025-11-24 09:52:12.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:12 compute-0 nova_compute[257700]: 2025-11-24 09:52:12.940 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:52:12 compute-0 nova_compute[257700]: 2025-11-24 09:52:12.941 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:52:12 compute-0 nova_compute[257700]: 2025-11-24 09:52:12.941 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
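[annotation] The Acquiring/acquired/"released" triplet above is oslo.concurrency's lockutils reporting a named lock around the resource tracker's cache cleanup. A sketch of the decorator pattern that produces it, with the lock name taken from the log (the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs while holding the named lock; by default this is an
        # in-process (threading) lock, not a file-based external one.
        print("holding compute_resources")

    clean_compute_node_cache()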
Nov 24 09:52:12 compute-0 nova_compute[257700]: 2025-11-24 09:52:12.941 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:52:12 compute-0 nova_compute[257700]: 2025-11-24 09:52:12.941 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
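[annotation] The subprocess command above is how this node's libvirt/RBD driver audits storage capacity, and it matches the `client.openstack` "df" dispatches the ceph-mon keeps logging. A sketch that runs the same query and reads the cluster totals; it is only runnable on a node with the openstack keyring (like this one), and the `stats.total_avail_bytes` field is the stock `ceph df --format=json` schema as we understand it:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("free GiB:", round(stats["total_avail_bytes"] / 1024**3, 1))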
Nov 24 09:52:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:13 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e400020f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:13 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/405630170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:52:13 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1479336973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.383 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.524 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.525 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4887MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.526 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.526 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:52:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:13.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.625 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.626 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.667 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing inventories for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 09:52:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:13 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.730 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating ProviderTree inventory for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.730 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating inventory in ProviderTree for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
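[annotation] The inventory dict above is what placement schedules against; its usual capacity formula is (total - reserved) * allocation_ratio. A worked check with the exact numbers from the log line:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1

So this 8-vCPU, 7680 MB host advertises 32 schedulable vCPUs (4x overcommit), 7168 MB of RAM after the 512 MB reservation, and 53.1 GB of disk at the 0.9 ratio.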
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.745 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing aggregate associations for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.764 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing trait associations for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,HW_CPU_X86_BMI,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 09:52:13 compute-0 nova_compute[257700]: 2025-11-24 09:52:13.781 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:52:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:14 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:52:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:52:14 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366320252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:14 compute-0 nova_compute[257700]: 2025-11-24 09:52:14.264 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:52:14 compute-0 nova_compute[257700]: 2025-11-24 09:52:14.269 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:52:14 compute-0 nova_compute[257700]: 2025-11-24 09:52:14.287 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:52:14 compute-0 nova_compute[257700]: 2025-11-24 09:52:14.289 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:52:14 compute-0 nova_compute[257700]: 2025-11-24 09:52:14.289 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:52:14 compute-0 ceph-mon[74331]: pgmap v750: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:52:14 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1479336973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:14 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1366320252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:14 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:14 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:14.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v751: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:52:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:15 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:15 compute-0 nova_compute[257700]: 2025-11-24 09:52:15.289 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4217903816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:52:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:52:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:52:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:52:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:52:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:52:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:52:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:15.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:52:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:15 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e78008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:15 compute-0 nova_compute[257700]: 2025-11-24 09:52:15.916 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:16 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:16 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:16 compute-0 ceph-mon[74331]: pgmap v751: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 24 09:52:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:52:16 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/647738787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:52:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:52:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:16.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:52:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v752: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:52:16 compute-0 nova_compute[257700]: 2025-11-24 09:52:16.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:52:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:17 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:17.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:52:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:17.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:52:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:17.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:52:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:17 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:52:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:17 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:52:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:17.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:17 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:18 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:18 compute-0 ceph-mon[74331]: pgmap v752: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:52:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.540600) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938540629, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 543, "num_deletes": 255, "total_data_size": 613138, "memory_usage": 624536, "flush_reason": "Manual Compaction"}
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938545586, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 604618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23369, "largest_seqno": 23911, "table_properties": {"data_size": 601700, "index_size": 890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6559, "raw_average_key_size": 17, "raw_value_size": 595842, "raw_average_value_size": 1606, "num_data_blocks": 40, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977903, "oldest_key_time": 1763977903, "file_creation_time": 1763977938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 5035 microseconds, and 2489 cpu microseconds.
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.545631) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 604618 bytes OK
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.545648) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.547243) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.547256) EVENT_LOG_v1 {"time_micros": 1763977938547252, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.547271) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 610102, prev total WAL file size 610102, number of live WAL files 2.
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.547654) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(590KB)], [50(12MB)]
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938547704, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13493987, "oldest_snapshot_seqno": -1}
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5415 keys, 13358183 bytes, temperature: kUnknown
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938619901, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13358183, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13321950, "index_size": 21595, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138331, "raw_average_key_size": 25, "raw_value_size": 13224064, "raw_average_value_size": 2442, "num_data_blocks": 879, "num_entries": 5415, "num_filter_entries": 5415, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763977938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:52:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.620204) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13358183 bytes
Nov 24 09:52:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.621700) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.0 rd, 185.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.3 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(44.4) write-amplify(22.1) OK, records in: 5933, records dropped: 518 output_compression: NoCompression
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.621717) EVENT_LOG_v1 {"time_micros": 1763977938621709, "job": 26, "event": "compaction_finished", "compaction_time_micros": 72161, "compaction_time_cpu_micros": 26346, "output_level": 6, "num_output_files": 1, "total_output_size": 13358183, "num_input_records": 5933, "num_output_records": 5415, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938621934, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 24 09:52:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:18.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763977938624555, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.547552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.624591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.624596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.624598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.624599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:52:18 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:52:18.624601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:52:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v753: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:52:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:18.846Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:52:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:18.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:52:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:19 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:19 compute-0 ceph-mon[74331]: pgmap v753: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:52:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:19.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:19 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:19 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:20 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:52:20.560 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:52:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:52:20.560 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:52:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:52:20.560 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:52:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:20.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v754: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:52:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:52:20] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 24 09:52:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:52:20] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Nov 24 09:52:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:21 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:21.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:21 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:21 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:21 compute-0 ceph-mon[74331]: pgmap v754: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Nov 24 09:52:22 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:22 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:22.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v755: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:52:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:23 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:23.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:23 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e440019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:23 compute-0 ceph-mon[74331]: pgmap v755: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Nov 24 09:52:24 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:24 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:24.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v756: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Nov 24 09:52:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:25 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e7800a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:25 compute-0 sudo[263252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:52:25 compute-0 sudo[263252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:25 compute-0 sudo[263252]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:25.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:25 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:25 compute-0 ceph-mon[74331]: pgmap v756: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Nov 24 09:52:26 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:26 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e440019e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:26.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v757: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Nov 24 09:52:26 compute-0 podman[263279]: 2025-11-24 09:52:26.780763348 +0000 UTC m=+0.058216481 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 09:52:26 compute-0 podman[263280]: 2025-11-24 09:52:26.81155809 +0000 UTC m=+0.087577458 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 24 09:52:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:27 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:27.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:52:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:27 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:52:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:52:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:27.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:52:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:27 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:27 compute-0 ceph-mon[74331]: pgmap v757: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Nov 24 09:52:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:28 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:28.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v758: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 24 09:52:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:28.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:52:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:29 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:29.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:29 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:29 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:29 compute-0 ceph-mon[74331]: pgmap v758: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 24 09:52:30 compute-0 sshd-session[263325]: Received disconnect from 45.78.198.78 port 57506:11: Bye Bye [preauth]
Nov 24 09:52:30 compute-0 sshd-session[263325]: Disconnected from authenticating user daemon 45.78.198.78 port 57506 [preauth]
Nov 24 09:52:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:30 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:30.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v759: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 24 09:52:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:52:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:52:30] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 24 09:52:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:52:30] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 24 09:52:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:31 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:31.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:31 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:31 compute-0 ceph-mon[74331]: pgmap v759: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Nov 24 09:52:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:32 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64002250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:32.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v760: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 09:52:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:33 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:33.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:33 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e38003b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=404 latency=0.001000024s ======
Nov 24 09:52:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:33.800 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.001000024s
Nov 24 09:52:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - - [24/Nov/2025:09:52:33.818 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.000000000s
Nov 24 09:52:33 compute-0 ceph-mon[74331]: pgmap v760: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 09:52:34 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:34 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:34.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v761: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 24 09:52:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:35 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e44002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:35.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:35 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c004950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:35 compute-0 podman[263334]: 2025-11-24 09:52:35.769780167 +0000 UTC m=+0.044925132 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 24 09:52:35 compute-0 ceph-mon[74331]: pgmap v761: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 24 09:52:36 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:36 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e6c004950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Nov 24 09:52:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:36.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v762: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Nov 24 09:52:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[260009]: 24/11/2025 09:52:37 : epoch 69242a01 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1e64002250 fd 39 proxy ignored for local
Nov 24 09:52:37 compute-0 kernel: ganesha.nfsd[263277]: segfault at 50 ip 00007f1f2211a32e sp 00007f1ee3ffe210 error 4 in libntirpc.so.5.8[7f1f220ff000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 24 09:52:37 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 24 09:52:37 compute-0 systemd[1]: Started Process Core Dump (PID 263356/UID 0).
Nov 24 09:52:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:37.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:52:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:37.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:37 compute-0 ceph-mon[74331]: pgmap v762: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Nov 24 09:52:38 compute-0 systemd-coredump[263357]: Process 260013 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 69:
                                                    #0  0x00007f1f2211a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Nov 24 09:52:38 compute-0 systemd[1]: systemd-coredump@10-263356-0.service: Deactivated successfully.
Nov 24 09:52:38 compute-0 systemd[1]: systemd-coredump@10-263356-0.service: Consumed 1.059s CPU time.
Nov 24 09:52:38 compute-0 podman[263363]: 2025-11-24 09:52:38.251716708 +0000 UTC m=+0.029660345 container died 00c13a7990cd7d517ad65204534333af4acd6989a91aff71b1c84a43c5349db8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:52:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-27f42a2e27276e1a66ee3acb84ad172394ad5277417af75a2ecb77fa22ec0f14-merged.mount: Deactivated successfully.
Nov 24 09:52:38 compute-0 podman[263363]: 2025-11-24 09:52:38.292873296 +0000 UTC m=+0.070816913 container remove 00c13a7990cd7d517ad65204534333af4acd6989a91aff71b1c84a43c5349db8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 09:52:38 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Main process exited, code=exited, status=139/n/a
Nov 24 09:52:38 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Failed with result 'exit-code'.
Nov 24 09:52:38 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.611s CPU time.
Nov 24 09:52:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v763: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 24 09:52:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:38.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:52:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 24 09:52:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 24 09:52:38 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 24 09:52:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:39.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 24 09:52:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 24 09:52:39 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 24 09:52:39 compute-0 ceph-mon[74331]: pgmap v763: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Nov 24 09:52:39 compute-0 ceph-mon[74331]: osdmap e151: 3 total, 3 up, 3 in
Nov 24 09:52:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:40.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v766: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Nov 24 09:52:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 24 09:52:40 compute-0 ceph-mon[74331]: osdmap e152: 3 total, 3 up, 3 in
Nov 24 09:52:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 24 09:52:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:52:40] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 24 09:52:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:52:40] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Nov 24 09:52:40 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 24 09:52:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:41.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:41 compute-0 ceph-mon[74331]: pgmap v766: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Nov 24 09:52:41 compute-0 ceph-mon[74331]: osdmap e153: 3 total, 3 up, 3 in
Nov 24 09:52:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:42.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v768: 353 pgs: 353 active+clean; 21 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 32 op/s
Nov 24 09:52:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 24 09:52:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 24 09:52:43 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 24 09:52:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [WARNING] 327/095243 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 24 09:52:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf[97304]: [ALERT] 327/095243 (4) : backend 'backend' has no server available!
Nov 24 09:52:43 compute-0 sudo[263411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:52:43 compute-0 sudo[263411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:43 compute-0 sudo[263411]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:43 compute-0 sudo[263436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:52:43 compute-0 sudo[263436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 24 09:52:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 24 09:52:43 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 24 09:52:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:43.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:43 compute-0 sudo[263436]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v771: 353 pgs: 353 active+clean; 21 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.2 MiB/s wr, 49 op/s
Nov 24 09:52:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:52:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:52:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:52:43 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:52:43 compute-0 sudo[263493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:52:43 compute-0 sudo[263493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:43 compute-0 sudo[263493]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:44 compute-0 ceph-mon[74331]: pgmap v768: 353 pgs: 353 active+clean; 21 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 32 op/s
Nov 24 09:52:44 compute-0 ceph-mon[74331]: osdmap e154: 3 total, 3 up, 3 in
Nov 24 09:52:44 compute-0 ceph-mon[74331]: osdmap e155: 3 total, 3 up, 3 in
Nov 24 09:52:44 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:52:44 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:52:44 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:52:44 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:52:44 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:52:44 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:52:44 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:52:44 compute-0 sudo[263518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:52:44 compute-0 sudo[263518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:44 compute-0 podman[263584]: 2025-11-24 09:52:44.507750672 +0000 UTC m=+0.046694207 container create a858219fe6f1566b44853ced30b860618258df46a8ea3d1c52fc2604861b774b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_gould, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:52:44 compute-0 systemd[1]: Started libpod-conmon-a858219fe6f1566b44853ced30b860618258df46a8ea3d1c52fc2604861b774b.scope.
Nov 24 09:52:44 compute-0 podman[263584]: 2025-11-24 09:52:44.485092361 +0000 UTC m=+0.024035896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:52:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:52:44 compute-0 podman[263584]: 2025-11-24 09:52:44.596500738 +0000 UTC m=+0.135444243 container init a858219fe6f1566b44853ced30b860618258df46a8ea3d1c52fc2604861b774b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:52:44 compute-0 podman[263584]: 2025-11-24 09:52:44.605931532 +0000 UTC m=+0.144875027 container start a858219fe6f1566b44853ced30b860618258df46a8ea3d1c52fc2604861b774b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 09:52:44 compute-0 podman[263584]: 2025-11-24 09:52:44.609423859 +0000 UTC m=+0.148367354 container attach a858219fe6f1566b44853ced30b860618258df46a8ea3d1c52fc2604861b774b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_gould, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:52:44 compute-0 peaceful_gould[263600]: 167 167
Nov 24 09:52:44 compute-0 systemd[1]: libpod-a858219fe6f1566b44853ced30b860618258df46a8ea3d1c52fc2604861b774b.scope: Deactivated successfully.
Nov 24 09:52:44 compute-0 podman[263584]: 2025-11-24 09:52:44.613204631 +0000 UTC m=+0.152148146 container died a858219fe6f1566b44853ced30b860618258df46a8ea3d1c52fc2604861b774b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:52:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8de260367e1a61d1dbaf2a533cd8fbde6b1eb4e394f207e0dcf76f8a971169d-merged.mount: Deactivated successfully.
Nov 24 09:52:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:44 compute-0 podman[263584]: 2025-11-24 09:52:44.648411864 +0000 UTC m=+0.187355359 container remove a858219fe6f1566b44853ced30b860618258df46a8ea3d1c52fc2604861b774b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_gould, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:52:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:44.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:44 compute-0 systemd[1]: libpod-conmon-a858219fe6f1566b44853ced30b860618258df46a8ea3d1c52fc2604861b774b.scope: Deactivated successfully.
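The throwaway container above (peaceful_gould, created, started, and gone within ~150 ms) printed only "167 167" before exiting. This matches cephadm probing the ceph uid/gid inside the image before launching the real ceph-volume work; 167:167 is the ceph user and group in the upstream image. A minimal sketch of an equivalent probe, assuming podman is on PATH, the image digest from this log is pullable, and that the probe is a `stat` of /var/lib/ceph (an assumption; the log only shows the output):

#!/usr/bin/env python3
# Hedged sketch: reproduce the uid/gid probe whose output ("167 167") the
# short-lived helper containers in this log print. Assumes podman is
# installed and that stat'ing /var/lib/ceph inside the image is what
# produces the two numbers.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

def probe_ceph_uid_gid(image: str = IMAGE) -> tuple[int, int]:
    """Run a throwaway container that stats /var/lib/ceph and exits."""
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return int(out[0]), int(out[1])

if __name__ == "__main__":
    print(probe_ceph_uid_gid())  # expected (167, 167), per the log output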
Nov 24 09:52:44 compute-0 podman[263624]: 2025-11-24 09:52:44.811442938 +0000 UTC m=+0.037701664 container create e8bbffe1d27e962438b2c419ebcf7e47495025f488304b0d80b62ad0e1f3b32b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 24 09:52:44 compute-0 systemd[1]: Started libpod-conmon-e8bbffe1d27e962438b2c419ebcf7e47495025f488304b0d80b62ad0e1f3b32b.scope.
Nov 24 09:52:44 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39c94a0c28a6ccd193cbf83d5a5326d94dd283fd81eab0647c0243816f8807a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39c94a0c28a6ccd193cbf83d5a5326d94dd283fd81eab0647c0243816f8807a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39c94a0c28a6ccd193cbf83d5a5326d94dd283fd81eab0647c0243816f8807a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39c94a0c28a6ccd193cbf83d5a5326d94dd283fd81eab0647c0243816f8807a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c39c94a0c28a6ccd193cbf83d5a5326d94dd283fd81eab0647c0243816f8807a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:44 compute-0 podman[263624]: 2025-11-24 09:52:44.874901539 +0000 UTC m=+0.101160285 container init e8bbffe1d27e962438b2c419ebcf7e47495025f488304b0d80b62ad0e1f3b32b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_wu, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 09:52:44 compute-0 podman[263624]: 2025-11-24 09:52:44.88343236 +0000 UTC m=+0.109691086 container start e8bbffe1d27e962438b2c419ebcf7e47495025f488304b0d80b62ad0e1f3b32b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 09:52:44 compute-0 podman[263624]: 2025-11-24 09:52:44.886564498 +0000 UTC m=+0.112823234 container attach e8bbffe1d27e962438b2c419ebcf7e47495025f488304b0d80b62ad0e1f3b32b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:52:44 compute-0 podman[263624]: 2025-11-24 09:52:44.79572613 +0000 UTC m=+0.021984886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:52:45 compute-0 ceph-mon[74331]: pgmap v771: 353 pgs: 353 active+clean; 21 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.2 MiB/s wr, 49 op/s
Nov 24 09:52:45 compute-0 inspiring_wu[263641]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:52:45 compute-0 inspiring_wu[263641]: --> All data devices are unavailable
Nov 24 09:52:45 compute-0 systemd[1]: libpod-e8bbffe1d27e962438b2c419ebcf7e47495025f488304b0d80b62ad0e1f3b32b.scope: Deactivated successfully.
Nov 24 09:52:45 compute-0 podman[263624]: 2025-11-24 09:52:45.223843585 +0000 UTC m=+0.450102311 container died e8bbffe1d27e962438b2c419ebcf7e47495025f488304b0d80b62ad0e1f3b32b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_wu, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:52:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c39c94a0c28a6ccd193cbf83d5a5326d94dd283fd81eab0647c0243816f8807a-merged.mount: Deactivated successfully.
Nov 24 09:52:45 compute-0 podman[263624]: 2025-11-24 09:52:45.261857717 +0000 UTC m=+0.488116443 container remove e8bbffe1d27e962438b2c419ebcf7e47495025f488304b0d80b62ad0e1f3b32b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_wu, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:52:45 compute-0 systemd[1]: libpod-conmon-e8bbffe1d27e962438b2c419ebcf7e47495025f488304b0d80b62ad0e1f3b32b.scope: Deactivated successfully.
Nov 24 09:52:45 compute-0 sudo[263518]: pam_unix(sudo:session): session closed for user root
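The sudo session that just closed is the one opened at 09:52:44 to run `ceph-volume ... lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd` through the cephadm wrapper. Its container output ("passed data devices: 0 physical, 1 LVM" then "All data devices are unavailable") means batch rejected the only candidate LV rather than creating anything; the `lvm list` run at 09:52:46 below shows why, since the LV already carries OSD 0's tags. A hedged dry-run of the same evaluation, assuming ceph-volume is installed on the host (on a cephadm host it normally lives only inside the container, so this is illustrative):

#!/usr/bin/env python3
# Hedged sketch: ask ceph-volume what `lvm batch` would do with the same LV
# without creating anything. `--report --format json` is the documented
# dry-run mode of `ceph-volume lvm batch`.
import json
import subprocess

CMD = ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
       "--no-auto", "/dev/ceph_vg0/ceph_lv0"]

res = subprocess.run(CMD, capture_output=True, text=True)
try:
    print(json.dumps(json.loads(res.stdout), indent=2))
except json.JSONDecodeError:
    # On an already-deployed LV, expect output along the lines of the
    # "All data devices are unavailable" message seen in this log.
    print(res.stdout, res.stderr)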
Nov 24 09:52:45 compute-0 sudo[263669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:52:45
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:52:45 compute-0 sudo[263669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['vms', '.rgw.root', 'volumes', 'images', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.mgr', '.nfs', 'default.rgw.log']
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:52:45 compute-0 sudo[263669]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:52:45 compute-0 sudo[263694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:52:45 compute-0 sudo[263694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:45 compute-0 sudo[263707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:52:45 compute-0 sudo[263707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:45 compute-0 sudo[263707]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:52:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:45.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
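The recurring radosgw "beast" lines in this window are anonymous HEAD / probes arriving roughly once per second from 192.168.122.100 and .102, i.e. load-balancer health checks rather than user traffic. A small parser for this access-log format; the regex is hedged in that it only covers the fields visible in these lines:

import re

# Matches the beast access-log lines in this journal, e.g.:
# beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous
#   [24/Nov/2025:09:52:45.671 +0000] "HEAD / HTTP/1.0" 200 0 - - -
#   latency=0.000000000s
BEAST_RE = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<verb>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous '
        '[24/Nov/2025:09:52:45.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST_RE.search(line)
if m:
    print(m.group("ip"), m.group("verb"), m.group("status"),
          m.group("latency"))
    # -> 192.168.122.102 HEAD 200 0.000000000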
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033296094614833626 of space, bias 1.0, pg target 0.09988828384450088 quantized to 32 (current 32)
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
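Every "pg target" printed in the pg_autoscaler block above equals capacity_ratio x bias x 300. The factor 300 is not stated in the log; it is consistent with this cluster's three OSDs times the default mon_target_pg_per_osd of 100, which is an assumption. For 'images': 0.00033296094614833626 x 1.0 x 300 = 0.09988828384450088, exactly the value logged. A sketch that checks the arithmetic against the logged figures:

import math

# Assumption: 3 OSDs x mon_target_pg_per_osd=100. The real module then
# rounds toward powers of two subject to per-pool minimums and change
# hysteresis, which is why tiny raw targets still show
# "quantized to 32 (current 32)"; that step is not modeled here.
TOTAL_TARGET_PGS = 3 * 100

rows = [
    # (pool, capacity_ratio, bias, "pg target" as printed in the log)
    (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
    ("images",             0.00033296094614833626, 1.0, 0.09988828384450088),
    ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
    ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
]

for pool, ratio, bias, logged in rows:
    target = ratio * bias * TOTAL_TARGET_PGS
    assert math.isclose(target, logged, rel_tol=1e-12), pool
    print(f"{pool}: {ratio} * {bias} * {TOTAL_TARGET_PGS} = {target}")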
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:52:45 compute-0 podman[263787]: 2025-11-24 09:52:45.800305114 +0000 UTC m=+0.039023797 container create 6a460081ee86942f8dda6797394a4c173df88045bf2d64b0aae40813773e8ffa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:52:45 compute-0 systemd[1]: Started libpod-conmon-6a460081ee86942f8dda6797394a4c173df88045bf2d64b0aae40813773e8ffa.scope.
Nov 24 09:52:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:52:45 compute-0 podman[263787]: 2025-11-24 09:52:45.782852532 +0000 UTC m=+0.021571235 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:52:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v772: 353 pgs: 353 active+clean; 21 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.9 MiB/s wr, 37 op/s
Nov 24 09:52:45 compute-0 podman[263787]: 2025-11-24 09:52:45.885772659 +0000 UTC m=+0.124491362 container init 6a460081ee86942f8dda6797394a4c173df88045bf2d64b0aae40813773e8ffa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:52:45 compute-0 podman[263787]: 2025-11-24 09:52:45.891293756 +0000 UTC m=+0.130012439 container start 6a460081ee86942f8dda6797394a4c173df88045bf2d64b0aae40813773e8ffa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 24 09:52:45 compute-0 suspicious_booth[263803]: 167 167
Nov 24 09:52:45 compute-0 systemd[1]: libpod-6a460081ee86942f8dda6797394a4c173df88045bf2d64b0aae40813773e8ffa.scope: Deactivated successfully.
Nov 24 09:52:45 compute-0 podman[263787]: 2025-11-24 09:52:45.896129986 +0000 UTC m=+0.134848689 container attach 6a460081ee86942f8dda6797394a4c173df88045bf2d64b0aae40813773e8ffa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 09:52:45 compute-0 podman[263787]: 2025-11-24 09:52:45.896757391 +0000 UTC m=+0.135476074 container died 6a460081ee86942f8dda6797394a4c173df88045bf2d64b0aae40813773e8ffa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 09:52:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d93460a45c8770556d949627e366ea98cfc33e32c2241e7d027f59f9510f4cae-merged.mount: Deactivated successfully.
Nov 24 09:52:45 compute-0 podman[263787]: 2025-11-24 09:52:45.924770204 +0000 UTC m=+0.163488877 container remove 6a460081ee86942f8dda6797394a4c173df88045bf2d64b0aae40813773e8ffa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 09:52:45 compute-0 systemd[1]: libpod-conmon-6a460081ee86942f8dda6797394a4c173df88045bf2d64b0aae40813773e8ffa.scope: Deactivated successfully.
Nov 24 09:52:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:52:46 compute-0 podman[263828]: 2025-11-24 09:52:46.088093557 +0000 UTC m=+0.035212002 container create 8f7d515b28b57ac7dabafb9698f142621f8b896799ddf7e8e9fab19b3f829dfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_keldysh, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:52:46 compute-0 systemd[1]: Started libpod-conmon-8f7d515b28b57ac7dabafb9698f142621f8b896799ddf7e8e9fab19b3f829dfd.scope.
Nov 24 09:52:46 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094c5001cb92b59a322d375ff1525ea45249b1fdec4ab6133161e0bf6542c25f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094c5001cb92b59a322d375ff1525ea45249b1fdec4ab6133161e0bf6542c25f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094c5001cb92b59a322d375ff1525ea45249b1fdec4ab6133161e0bf6542c25f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094c5001cb92b59a322d375ff1525ea45249b1fdec4ab6133161e0bf6542c25f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:46 compute-0 podman[263828]: 2025-11-24 09:52:46.073439484 +0000 UTC m=+0.020557959 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:52:46 compute-0 podman[263828]: 2025-11-24 09:52:46.170365954 +0000 UTC m=+0.117484399 container init 8f7d515b28b57ac7dabafb9698f142621f8b896799ddf7e8e9fab19b3f829dfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:52:46 compute-0 podman[263828]: 2025-11-24 09:52:46.17874113 +0000 UTC m=+0.125859575 container start 8f7d515b28b57ac7dabafb9698f142621f8b896799ddf7e8e9fab19b3f829dfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:52:46 compute-0 podman[263828]: 2025-11-24 09:52:46.182020292 +0000 UTC m=+0.129138737 container attach 8f7d515b28b57ac7dabafb9698f142621f8b896799ddf7e8e9fab19b3f829dfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]: {
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:     "0": [
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:         {
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             "devices": [
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "/dev/loop3"
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             ],
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             "lv_name": "ceph_lv0",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             "lv_size": "21470642176",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             "name": "ceph_lv0",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             "tags": {
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.cluster_name": "ceph",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.crush_device_class": "",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.encrypted": "0",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.osd_id": "0",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.type": "block",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.vdo": "0",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:                 "ceph.with_tpm": "0"
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             },
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             "type": "block",
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:             "vg_name": "ceph_vg0"
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:         }
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]:     ]
Nov 24 09:52:46 compute-0 nervous_keldysh[263844]: }
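The nervous_keldysh container is the `ceph-volume ... lvm list --format json` call from the 09:52:45 sudo line, and its JSON explains the earlier batch refusal: /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3) is already tagged as OSD 0 (ceph.osd_id=0, osd_fsid 4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c), so there is nothing new to deploy. A small parser for that output, using an abridged paste of the JSON above:

import json

# Abridged paste of the container output above, keeping only the fields
# used below.
raw = """
{
  "0": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "lv_size": "21470642176",
      "tags": {"ceph.osd_id": "0",
               "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
               "ceph.type": "block"}
    }
  ]
}
"""

for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        size_gib = int(lv["lv_size"]) / 2**30
        print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']} "
              f"({size_gib:.1f} GiB, type={lv['tags']['ceph.type']})")
# -> osd.0: /dev/ceph_vg0/ceph_lv0 on ['/dev/loop3'] (20.0 GiB, type=block)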
Nov 24 09:52:46 compute-0 systemd[1]: libpod-8f7d515b28b57ac7dabafb9698f142621f8b896799ddf7e8e9fab19b3f829dfd.scope: Deactivated successfully.
Nov 24 09:52:46 compute-0 podman[263828]: 2025-11-24 09:52:46.47084154 +0000 UTC m=+0.417959985 container died 8f7d515b28b57ac7dabafb9698f142621f8b896799ddf7e8e9fab19b3f829dfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_keldysh, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 09:52:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-094c5001cb92b59a322d375ff1525ea45249b1fdec4ab6133161e0bf6542c25f-merged.mount: Deactivated successfully.
Nov 24 09:52:46 compute-0 podman[263828]: 2025-11-24 09:52:46.511031935 +0000 UTC m=+0.458150380 container remove 8f7d515b28b57ac7dabafb9698f142621f8b896799ddf7e8e9fab19b3f829dfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_keldysh, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:52:46 compute-0 systemd[1]: libpod-conmon-8f7d515b28b57ac7dabafb9698f142621f8b896799ddf7e8e9fab19b3f829dfd.scope: Deactivated successfully.
Nov 24 09:52:46 compute-0 sudo[263694]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:46 compute-0 sudo[263868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:52:46 compute-0 sudo[263868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:46 compute-0 sudo[263868]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:46.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:46 compute-0 sudo[263893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:52:46 compute-0 sudo[263893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:47 compute-0 podman[263958]: 2025-11-24 09:52:47.022688759 +0000 UTC m=+0.035884328 container create f53a99450c804cbae4c807359f83add99dd3cb2668f0b703c9ea39e56d8af461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_edison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 24 09:52:47 compute-0 ceph-mon[74331]: pgmap v772: 353 pgs: 353 active+clean; 21 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.9 MiB/s wr, 37 op/s
Nov 24 09:52:47 compute-0 systemd[1]: Started libpod-conmon-f53a99450c804cbae4c807359f83add99dd3cb2668f0b703c9ea39e56d8af461.scope.
Nov 24 09:52:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:52:47 compute-0 podman[263958]: 2025-11-24 09:52:47.094002264 +0000 UTC m=+0.107197893 container init f53a99450c804cbae4c807359f83add99dd3cb2668f0b703c9ea39e56d8af461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 24 09:52:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:47.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:52:47 compute-0 podman[263958]: 2025-11-24 09:52:47.100761012 +0000 UTC m=+0.113956581 container start f53a99450c804cbae4c807359f83add99dd3cb2668f0b703c9ea39e56d8af461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_edison, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:52:47 compute-0 podman[263958]: 2025-11-24 09:52:47.007998996 +0000 UTC m=+0.021194585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:52:47 compute-0 gallant_edison[263974]: 167 167
Nov 24 09:52:47 compute-0 podman[263958]: 2025-11-24 09:52:47.104524235 +0000 UTC m=+0.117719824 container attach f53a99450c804cbae4c807359f83add99dd3cb2668f0b703c9ea39e56d8af461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_edison, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 09:52:47 compute-0 systemd[1]: libpod-f53a99450c804cbae4c807359f83add99dd3cb2668f0b703c9ea39e56d8af461.scope: Deactivated successfully.
Nov 24 09:52:47 compute-0 podman[263958]: 2025-11-24 09:52:47.104839113 +0000 UTC m=+0.118034672 container died f53a99450c804cbae4c807359f83add99dd3cb2668f0b703c9ea39e56d8af461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_edison, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 09:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eceb5eed3fa249fbb279cba4924209aa98ed4daf1b83b83d4e21c8d4cd0e0c4-merged.mount: Deactivated successfully.
Nov 24 09:52:47 compute-0 podman[263958]: 2025-11-24 09:52:47.133825651 +0000 UTC m=+0.147021230 container remove f53a99450c804cbae4c807359f83add99dd3cb2668f0b703c9ea39e56d8af461 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:52:47 compute-0 systemd[1]: libpod-conmon-f53a99450c804cbae4c807359f83add99dd3cb2668f0b703c9ea39e56d8af461.scope: Deactivated successfully.
Nov 24 09:52:47 compute-0 podman[263997]: 2025-11-24 09:52:47.298764072 +0000 UTC m=+0.037777366 container create 4c74f6392c675e5eb88a4d9d9d39fc8388c91068a964ca997b52b33ba46e48cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_leakey, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:52:47 compute-0 systemd[1]: Started libpod-conmon-4c74f6392c675e5eb88a4d9d9d39fc8388c91068a964ca997b52b33ba46e48cc.scope.
Nov 24 09:52:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fef9bedda45c0d1d2400d3d4ccebd4f7a80c02f413278a1c4dd2d04312dcd51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fef9bedda45c0d1d2400d3d4ccebd4f7a80c02f413278a1c4dd2d04312dcd51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fef9bedda45c0d1d2400d3d4ccebd4f7a80c02f413278a1c4dd2d04312dcd51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fef9bedda45c0d1d2400d3d4ccebd4f7a80c02f413278a1c4dd2d04312dcd51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:47 compute-0 podman[263997]: 2025-11-24 09:52:47.282616183 +0000 UTC m=+0.021629517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:52:47 compute-0 podman[263997]: 2025-11-24 09:52:47.380218629 +0000 UTC m=+0.119231983 container init 4c74f6392c675e5eb88a4d9d9d39fc8388c91068a964ca997b52b33ba46e48cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_leakey, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:52:47 compute-0 podman[263997]: 2025-11-24 09:52:47.388156455 +0000 UTC m=+0.127169759 container start 4c74f6392c675e5eb88a4d9d9d39fc8388c91068a964ca997b52b33ba46e48cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Nov 24 09:52:47 compute-0 podman[263997]: 2025-11-24 09:52:47.391304214 +0000 UTC m=+0.130317578 container attach 4c74f6392c675e5eb88a4d9d9d39fc8388c91068a964ca997b52b33ba46e48cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:52:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:47.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v773: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.9 MiB/s wr, 56 op/s
Nov 24 09:52:48 compute-0 lvm[264087]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:52:48 compute-0 lvm[264087]: VG ceph_vg0 finished
Nov 24 09:52:48 compute-0 cranky_leakey[264013]: {}
Nov 24 09:52:48 compute-0 systemd[1]: libpod-4c74f6392c675e5eb88a4d9d9d39fc8388c91068a964ca997b52b33ba46e48cc.scope: Deactivated successfully.
Nov 24 09:52:48 compute-0 systemd[1]: libpod-4c74f6392c675e5eb88a4d9d9d39fc8388c91068a964ca997b52b33ba46e48cc.scope: Consumed 1.090s CPU time.
Nov 24 09:52:48 compute-0 conmon[264013]: conmon 4c74f6392c675e5eb88a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c74f6392c675e5eb88a4d9d9d39fc8388c91068a964ca997b52b33ba46e48cc.scope/container/memory.events
Nov 24 09:52:48 compute-0 podman[263997]: 2025-11-24 09:52:48.113241592 +0000 UTC m=+0.852254896 container died 4c74f6392c675e5eb88a4d9d9d39fc8388c91068a964ca997b52b33ba46e48cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fef9bedda45c0d1d2400d3d4ccebd4f7a80c02f413278a1c4dd2d04312dcd51-merged.mount: Deactivated successfully.
Nov 24 09:52:48 compute-0 podman[263997]: 2025-11-24 09:52:48.168980552 +0000 UTC m=+0.907993856 container remove 4c74f6392c675e5eb88a4d9d9d39fc8388c91068a964ca997b52b33ba46e48cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_leakey, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:52:48 compute-0 systemd[1]: libpod-conmon-4c74f6392c675e5eb88a4d9d9d39fc8388c91068a964ca997b52b33ba46e48cc.scope: Deactivated successfully.
Nov 24 09:52:48 compute-0 sudo[263893]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:52:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:52:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:52:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:52:48 compute-0 sudo[264104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:52:48 compute-0 sudo[264104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:52:48 compute-0 sudo[264104]: pam_unix(sudo:session): session closed for user root
Nov 24 09:52:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:48.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:48 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Scheduled restart job, restart counter is at 11.
Nov 24 09:52:48 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:52:48 compute-0 systemd[1]: ceph-84a084c3-61a7-5de7-8207-1f88efa59a64@nfs.cephfs.2.0.compute-0.ssprex.service: Consumed 1.611s CPU time.
Nov 24 09:52:48 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64...
Nov 24 09:52:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:48.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:52:48 compute-0 podman[264179]: 2025-11-24 09:52:48.912954875 +0000 UTC m=+0.039660772 container create a8ff859c0ee484e58c6aaf58e6d722a3faffb91c2dea80441e79254f2043cb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 24 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c47988846cff2799f9e7c6c17e116345adb287cf05b25af5cfdb290129da14d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c47988846cff2799f9e7c6c17e116345adb287cf05b25af5cfdb290129da14d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c47988846cff2799f9e7c6c17e116345adb287cf05b25af5cfdb290129da14d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c47988846cff2799f9e7c6c17e116345adb287cf05b25af5cfdb290129da14d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ssprex-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:52:48 compute-0 podman[264179]: 2025-11-24 09:52:48.972671224 +0000 UTC m=+0.099377121 container init a8ff859c0ee484e58c6aaf58e6d722a3faffb91c2dea80441e79254f2043cb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:52:48 compute-0 podman[264179]: 2025-11-24 09:52:48.977976485 +0000 UTC m=+0.104682382 container start a8ff859c0ee484e58c6aaf58e6d722a3faffb91c2dea80441e79254f2043cb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:52:48 compute-0 bash[264179]: a8ff859c0ee484e58c6aaf58e6d722a3faffb91c2dea80441e79254f2043cb44
Nov 24 09:52:48 compute-0 podman[264179]: 2025-11-24 09:52:48.894004707 +0000 UTC m=+0.020710624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:52:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:48 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 24 09:52:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:48 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 24 09:52:48 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ssprex for 84a084c3-61a7-5de7-8207-1f88efa59a64.
Nov 24 09:52:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 24 09:52:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 24 09:52:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 24 09:52:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 24 09:52:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 24 09:52:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:52:49 compute-0 ceph-mon[74331]: pgmap v773: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.9 MiB/s wr, 56 op/s
Nov 24 09:52:49 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:52:49 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:52:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:49.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v774: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.8 MiB/s wr, 26 op/s
Nov 24 09:52:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:50.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:52:50] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 24 09:52:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:52:50] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 24 09:52:51 compute-0 ceph-mon[74331]: pgmap v774: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.8 MiB/s wr, 26 op/s
Nov 24 09:52:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:52:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:51.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:52:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v775: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.3 MiB/s wr, 23 op/s
Nov 24 09:52:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:52.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:53 compute-0 ceph-mon[74331]: pgmap v775: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.3 MiB/s wr, 23 op/s
Nov 24 09:52:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:53.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v776: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Nov 24 09:52:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:54.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:52:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:52:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:52:55 compute-0 ceph-mon[74331]: pgmap v776: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Nov 24 09:52:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:55.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v777: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Nov 24 09:52:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:56.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:57.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:52:57 compute-0 ceph-mon[74331]: pgmap v777: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Nov 24 09:52:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:57.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:57 compute-0 podman[264248]: 2025-11-24 09:52:57.796851494 +0000 UTC m=+0.071617053 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 09:52:57 compute-0 podman[264249]: 2025-11-24 09:52:57.817839174 +0000 UTC m=+0.090963573 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:52:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v778: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Nov 24 09:52:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:52:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:52:58.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:52:58.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:52:59 compute-0 ceph-mon[74331]: pgmap v778: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Nov 24 09:52:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:52:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:52:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:52:59.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:52:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v779: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 170 B/s wr, 1 op/s
Nov 24 09:53:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:53:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:53:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:52:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:53:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:53:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:00.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 09:53:00 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050834204' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:53:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 09:53:00 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050834204' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:53:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:53:00] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Nov 24 09:53:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:53:00] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Nov 24 09:53:01 compute-0 anacron[29927]: Job `cron.weekly' started
Nov 24 09:53:01 compute-0 anacron[29927]: Job `cron.weekly' terminated
Nov 24 09:53:01 compute-0 ceph-mon[74331]: pgmap v779: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 170 B/s wr, 1 op/s
Nov 24 09:53:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:53:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2050834204' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:53:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2050834204' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:53:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:01.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v780: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 170 B/s wr, 2 op/s
Nov 24 09:53:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:02.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:03 compute-0 ceph-mon[74331]: pgmap v780: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 170 B/s wr, 2 op/s
Nov 24 09:53:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:03.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v781: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Nov 24 09:53:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:04.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:53:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:53:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:53:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:53:05 compute-0 ceph-mon[74331]: pgmap v781: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Nov 24 09:53:05 compute-0 sudo[264303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:53:05 compute-0 sudo[264303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:05 compute-0 sudo[264303]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:05.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:05.779 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:53:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:05.780 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:53:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v782: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Nov 24 09:53:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:06.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:06 compute-0 podman[264329]: 2025-11-24 09:53:06.7668561 +0000 UTC m=+0.047785414 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:53:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:07.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:53:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:07.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:53:07 compute-0 ceph-mon[74331]: pgmap v782: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Nov 24 09:53:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:07.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 85 B/s wr, 1 op/s
Nov 24 09:53:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:08.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:08.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:53:09 compute-0 ceph-mon[74331]: pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 85 B/s wr, 1 op/s
Nov 24 09:53:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:09.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:53:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:53:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:53:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:53:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:10.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:53:10] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Nov 24 09:53:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:53:10] "GET /metrics HTTP/1.1" 200 48398 "" "Prometheus/2.51.0"
Nov 24 09:53:11 compute-0 ceph-mon[74331]: pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:11.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:53:11 compute-0 nova_compute[257700]: 2025-11-24 09:53:11.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:11 compute-0 nova_compute[257700]: 2025-11-24 09:53:11.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:11 compute-0 nova_compute[257700]: 2025-11-24 09:53:11.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:12 compute-0 ceph-mon[74331]: pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:53:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2058198224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:12.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:12 compute-0 nova_compute[257700]: 2025-11-24 09:53:12.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:12 compute-0 nova_compute[257700]: 2025-11-24 09:53:12.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:53:13 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4266170854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:13.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:13 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:13.782 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:53:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:13 compute-0 nova_compute[257700]: 2025-11-24 09:53:13.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:13 compute-0 nova_compute[257700]: 2025-11-24 09:53:13.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:53:13 compute-0 nova_compute[257700]: 2025-11-24 09:53:13.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:53:13 compute-0 nova_compute[257700]: 2025-11-24 09:53:13.935 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:53:13 compute-0 nova_compute[257700]: 2025-11-24 09:53:13.935 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:13 compute-0 nova_compute[257700]: 2025-11-24 09:53:13.953 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:13 compute-0 nova_compute[257700]: 2025-11-24 09:53:13.954 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:13 compute-0 nova_compute[257700]: 2025-11-24 09:53:13.954 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:53:13 compute-0 nova_compute[257700]: 2025-11-24 09:53:13.954 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:53:13 compute-0 nova_compute[257700]: 2025-11-24 09:53:13.954 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:53:14 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3122627864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:14 compute-0 nova_compute[257700]: 2025-11-24 09:53:14.385 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:14 compute-0 ceph-mon[74331]: pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:14 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3122627864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:14 compute-0 nova_compute[257700]: 2025-11-24 09:53:14.543 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:53:14 compute-0 nova_compute[257700]: 2025-11-24 09:53:14.544 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4934MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:53:14 compute-0 nova_compute[257700]: 2025-11-24 09:53:14.544 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:14 compute-0 nova_compute[257700]: 2025-11-24 09:53:14.545 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:14 compute-0 nova_compute[257700]: 2025-11-24 09:53:14.617 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:53:14 compute-0 nova_compute[257700]: 2025-11-24 09:53:14.617 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:53:14 compute-0 nova_compute[257700]: 2025-11-24 09:53:14.632 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:14.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:53:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:53:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:53:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:53:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:53:15 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178473982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:15 compute-0 nova_compute[257700]: 2025-11-24 09:53:15.114 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:15 compute-0 nova_compute[257700]: 2025-11-24 09:53:15.121 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:53:15 compute-0 nova_compute[257700]: 2025-11-24 09:53:15.138 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:53:15 compute-0 nova_compute[257700]: 2025-11-24 09:53:15.140 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:53:15 compute-0 nova_compute[257700]: 2025-11-24 09:53:15.140 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:53:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:53:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:53:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:53:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:53:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:53:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:53:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1178473982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1187586974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:53:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:15.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:16 compute-0 ceph-mon[74331]: pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:16 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1183407982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:16.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:17.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:53:17 compute-0 nova_compute[257700]: 2025-11-24 09:53:17.128 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:17 compute-0 nova_compute[257700]: 2025-11-24 09:53:17.144 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:17.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:53:17 compute-0 nova_compute[257700]: 2025-11-24 09:53:17.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:17 compute-0 nova_compute[257700]: 2025-11-24 09:53:17.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:53:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:18.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:18.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:53:18 compute-0 ceph-mon[74331]: pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:53:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:19.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:53:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:53:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:53:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:53:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:20.560 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:20.561 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:20.561 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:53:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:20.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:20 compute-0 ceph-mon[74331]: pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:53:20] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Nov 24 09:53:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:53:20] "GET /metrics HTTP/1.1" 200 48400 "" "Prometheus/2.51.0"
Nov 24 09:53:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:53:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:22.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:22 compute-0 ceph-mon[74331]: pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:53:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:23.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v791: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:24.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:53:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:53:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:53:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:53:25 compute-0 ceph-mon[74331]: pgmap v791: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:25 compute-0 sudo[264413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:53:25 compute-0 sudo[264413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:25 compute-0 sudo[264413]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:25.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.039174) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006039225, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 886, "num_deletes": 251, "total_data_size": 1414197, "memory_usage": 1439328, "flush_reason": "Manual Compaction"}
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006050681, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1394668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23912, "largest_seqno": 24797, "table_properties": {"data_size": 1390172, "index_size": 2148, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9990, "raw_average_key_size": 19, "raw_value_size": 1381042, "raw_average_value_size": 2745, "num_data_blocks": 95, "num_entries": 503, "num_filter_entries": 503, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763977939, "oldest_key_time": 1763977939, "file_creation_time": 1763978006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 11566 microseconds, and 5509 cpu microseconds.
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.050741) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1394668 bytes OK
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.050766) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.052820) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.052845) EVENT_LOG_v1 {"time_micros": 1763978006052837, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.052867) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1409952, prev total WAL file size 1409952, number of live WAL files 2.
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.053775) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1361KB)], [53(12MB)]
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006053863, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14752851, "oldest_snapshot_seqno": -1}
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5398 keys, 12546974 bytes, temperature: kUnknown
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006144240, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12546974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12511456, "index_size": 20944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 138701, "raw_average_key_size": 25, "raw_value_size": 12414344, "raw_average_value_size": 2299, "num_data_blocks": 848, "num_entries": 5398, "num_filter_entries": 5398, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763978006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.144467) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12546974 bytes
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.145377) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.1 rd, 138.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.7 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(19.6) write-amplify(9.0) OK, records in: 5918, records dropped: 520 output_compression: NoCompression
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.145391) EVENT_LOG_v1 {"time_micros": 1763978006145384, "job": 28, "event": "compaction_finished", "compaction_time_micros": 90432, "compaction_time_cpu_micros": 50337, "output_level": 6, "num_output_files": 1, "total_output_size": 12546974, "num_input_records": 5918, "num_output_records": 5398, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006145738, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978006148030, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.053593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.148153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.148158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.148160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.148161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:53:26 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:53:26.148163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:53:26 compute-0 nova_compute[257700]: 2025-11-24 09:53:26.357 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:26 compute-0 nova_compute[257700]: 2025-11-24 09:53:26.358 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:26 compute-0 nova_compute[257700]: 2025-11-24 09:53:26.389 257704 DEBUG nova.compute.manager [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 09:53:26 compute-0 nova_compute[257700]: 2025-11-24 09:53:26.539 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:26 compute-0 nova_compute[257700]: 2025-11-24 09:53:26.540 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:26 compute-0 nova_compute[257700]: 2025-11-24 09:53:26.548 257704 DEBUG nova.virt.hardware [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 09:53:26 compute-0 nova_compute[257700]: 2025-11-24 09:53:26.548 257704 INFO nova.compute.claims [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Claim successful on node compute-0.ctlplane.example.com
Nov 24 09:53:26 compute-0 nova_compute[257700]: 2025-11-24 09:53:26.640 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:26.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:27 compute-0 ceph-mon[74331]: pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:53:27 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2047181630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.082 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.092 257704 DEBUG nova.compute.provider_tree [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:53:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:27.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:53:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:27.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.107 257704 DEBUG nova.scheduler.client.report [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.131 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.132 257704 DEBUG nova.compute.manager [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.186 257704 DEBUG nova.compute.manager [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.188 257704 DEBUG nova.network.neutron [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.221 257704 INFO nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.246 257704 DEBUG nova.compute.manager [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.349 257704 DEBUG nova.compute.manager [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.351 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.351 257704 INFO nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Creating image(s)
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.408 257704 DEBUG nova.storage.rbd_utils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.453 257704 DEBUG nova.storage.rbd_utils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.486 257704 DEBUG nova.storage.rbd_utils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.491 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:27 compute-0 nova_compute[257700]: 2025-11-24 09:53:27.492 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:27.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:53:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2047181630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:53:28 compute-0 nova_compute[257700]: 2025-11-24 09:53:28.228 257704 DEBUG nova.virt.libvirt.imagebackend [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image locations are: [{'url': 'rbd://84a084c3-61a7-5de7-8207-1f88efa59a64/images/6ef14bdf-4f04-4400-8040-4409d9d5271e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://84a084c3-61a7-5de7-8207-1f88efa59a64/images/6ef14bdf-4f04-4400-8040-4409d9d5271e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 24 09:53:28 compute-0 nova_compute[257700]: 2025-11-24 09:53:28.295 257704 WARNING oslo_policy.policy [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 24 09:53:28 compute-0 nova_compute[257700]: 2025-11-24 09:53:28.296 257704 WARNING oslo_policy.policy [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 24 09:53:28 compute-0 nova_compute[257700]: 2025-11-24 09:53:28.298 257704 DEBUG nova.policy [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 09:53:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:28.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:28 compute-0 podman[264517]: 2025-11-24 09:53:28.810931616 +0000 UTC m=+0.080012071 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:53:28 compute-0 podman[264518]: 2025-11-24 09:53:28.842051706 +0000 UTC m=+0.112203228 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 24 09:53:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:28.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:53:29 compute-0 ceph-mon[74331]: pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:53:29 compute-0 nova_compute[257700]: 2025-11-24 09:53:29.101 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:29 compute-0 nova_compute[257700]: 2025-11-24 09:53:29.176 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.part --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:29 compute-0 nova_compute[257700]: 2025-11-24 09:53:29.177 257704 DEBUG nova.virt.images [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] 6ef14bdf-4f04-4400-8040-4409d9d5271e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 24 09:53:29 compute-0 nova_compute[257700]: 2025-11-24 09:53:29.179 257704 DEBUG nova.privsep.utils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 24 09:53:29 compute-0 nova_compute[257700]: 2025-11-24 09:53:29.179 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.part /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:29 compute-0 nova_compute[257700]: 2025-11-24 09:53:29.345 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.part /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.converted" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:29 compute-0 nova_compute[257700]: 2025-11-24 09:53:29.351 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:29 compute-0 nova_compute[257700]: 2025-11-24 09:53:29.406 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40.converted --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:29 compute-0 nova_compute[257700]: 2025-11-24 09:53:29.407 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:53:29 compute-0 nova_compute[257700]: 2025-11-24 09:53:29.436 257704 DEBUG nova.storage.rbd_utils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:53:29 compute-0 nova_compute[257700]: 2025-11-24 09:53:29.440 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:29 compute-0 nova_compute[257700]: 2025-11-24 09:53:29.545 257704 DEBUG nova.network.neutron [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Successfully created port: a483f88b-7075-47e4-a535-d23a6d20a8b0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 09:53:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:29.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:53:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:53:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:53:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:53:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 24 09:53:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 24 09:53:30 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 24 09:53:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:30.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:30 compute-0 nova_compute[257700]: 2025-11-24 09:53:30.847 257704 DEBUG nova.network.neutron [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Successfully updated port: a483f88b-7075-47e4-a535-d23a6d20a8b0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 09:53:30 compute-0 nova_compute[257700]: 2025-11-24 09:53:30.864 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:53:30 compute-0 nova_compute[257700]: 2025-11-24 09:53:30.864 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:53:30 compute-0 nova_compute[257700]: 2025-11-24 09:53:30.865 257704 DEBUG nova.network.neutron [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 09:53:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:53:30] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 24 09:53:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:53:30] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 24 09:53:31 compute-0 nova_compute[257700]: 2025-11-24 09:53:31.018 257704 DEBUG nova.network.neutron [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 09:53:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 24 09:53:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 24 09:53:31 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 24 09:53:31 compute-0 ceph-mon[74331]: pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:53:31 compute-0 ceph-mon[74331]: osdmap e156: 3 total, 3 up, 3 in
Nov 24 09:53:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:53:31 compute-0 nova_compute[257700]: 2025-11-24 09:53:31.350 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.910s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:31 compute-0 nova_compute[257700]: 2025-11-24 09:53:31.385 257704 DEBUG nova.compute.manager [req-15eaf96f-f07c-499d-8bf1-7ebdf3160929 req-ab99e9b1-5d6c-4ef7-b136-515eca4e4694 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Received event network-changed-a483f88b-7075-47e4-a535-d23a6d20a8b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:53:31 compute-0 nova_compute[257700]: 2025-11-24 09:53:31.386 257704 DEBUG nova.compute.manager [req-15eaf96f-f07c-499d-8bf1-7ebdf3160929 req-ab99e9b1-5d6c-4ef7-b136-515eca4e4694 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Refreshing instance network info cache due to event network-changed-a483f88b-7075-47e4-a535-d23a6d20a8b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:53:31 compute-0 nova_compute[257700]: 2025-11-24 09:53:31.386 257704 DEBUG oslo_concurrency.lockutils [req-15eaf96f-f07c-499d-8bf1-7ebdf3160929 req-ab99e9b1-5d6c-4ef7-b136-515eca4e4694 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:53:31 compute-0 nova_compute[257700]: 2025-11-24 09:53:31.426 257704 DEBUG nova.storage.rbd_utils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 24 09:53:31 compute-0 nova_compute[257700]: 2025-11-24 09:53:31.526 257704 DEBUG nova.objects.instance [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid a30689a0-a2d7-4b8d-9f45-9763cda52bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:53:31 compute-0 nova_compute[257700]: 2025-11-24 09:53:31.547 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 09:53:31 compute-0 nova_compute[257700]: 2025-11-24 09:53:31.547 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Ensure instance console log exists: /var/lib/nova/instances/a30689a0-a2d7-4b8d-9f45-9763cda52bf9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 09:53:31 compute-0 nova_compute[257700]: 2025-11-24 09:53:31.547 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:31 compute-0 nova_compute[257700]: 2025-11-24 09:53:31.548 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:31 compute-0 nova_compute[257700]: 2025-11-24 09:53:31.548 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:53:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:31.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v797: 353 pgs: 353 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 52 op/s
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.048 257704 DEBUG nova.network.neutron [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Updating instance_info_cache with network_info: [{"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.069 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.069 257704 DEBUG nova.compute.manager [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Instance network_info: |[{"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.069 257704 DEBUG oslo_concurrency.lockutils [req-15eaf96f-f07c-499d-8bf1-7ebdf3160929 req-ab99e9b1-5d6c-4ef7-b136-515eca4e4694 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.070 257704 DEBUG nova.network.neutron [req-15eaf96f-f07c-499d-8bf1-7ebdf3160929 req-ab99e9b1-5d6c-4ef7-b136-515eca4e4694 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Refreshing network info cache for port a483f88b-7075-47e4-a535-d23a6d20a8b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.072 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Start _get_guest_xml network_info=[{"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_format': None, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.076 257704 WARNING nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.080 257704 DEBUG nova.virt.libvirt.host [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.081 257704 DEBUG nova.virt.libvirt.host [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.087 257704 DEBUG nova.virt.libvirt.host [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.087 257704 DEBUG nova.virt.libvirt.host [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.088 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.088 257704 DEBUG nova.virt.hardware [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.088 257704 DEBUG nova.virt.hardware [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.089 257704 DEBUG nova.virt.hardware [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.089 257704 DEBUG nova.virt.hardware [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.089 257704 DEBUG nova.virt.hardware [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.089 257704 DEBUG nova.virt.hardware [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.089 257704 DEBUG nova.virt.hardware [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.090 257704 DEBUG nova.virt.hardware [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.090 257704 DEBUG nova.virt.hardware [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.090 257704 DEBUG nova.virt.hardware [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.090 257704 DEBUG nova.virt.hardware [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.093 257704 DEBUG nova.privsep.utils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.094 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:32 compute-0 ceph-mon[74331]: osdmap e157: 3 total, 3 up, 3 in
Nov 24 09:53:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:53:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3757053087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.515 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.539 257704 DEBUG nova.storage.rbd_utils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.542 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:32.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:53:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1655492014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.995 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.997 257704 DEBUG nova.virt.libvirt.vif [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:53:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1978315975',display_name='tempest-TestNetworkBasicOps-server-1978315975',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1978315975',id=1,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJNthCOG5s/vj2boz4+BJ2PgQP/GfmCVp6AlZEWP14On33KLzHHGoFmk6PLUtyAqj03T1Qn3XgryOX94XA7OB9At/bgHp1KmuCanoF6+mPqReV5daqHshzy/eMS+IKuQNA==',key_name='tempest-TestNetworkBasicOps-1678285583',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-cxh7ww8x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:53:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=a30689a0-a2d7-4b8d-9f45-9763cda52bf9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.997 257704 DEBUG nova.network.os_vif_util [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:53:32 compute-0 nova_compute[257700]: 2025-11-24 09:53:32.998 257704 DEBUG nova.network.os_vif_util [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:ce:66,bridge_name='br-int',has_traffic_filtering=True,id=a483f88b-7075-47e4-a535-d23a6d20a8b0,network=Network(0dc1b2d1-8ad8-483c-a726-aec9ed2927a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa483f88b-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.000 257704 DEBUG nova.objects.instance [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid a30689a0-a2d7-4b8d-9f45-9763cda52bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.013 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] End _get_guest_xml xml=<domain type="kvm">
Nov 24 09:53:33 compute-0 nova_compute[257700]:   <uuid>a30689a0-a2d7-4b8d-9f45-9763cda52bf9</uuid>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   <name>instance-00000001</name>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   <memory>131072</memory>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   <vcpu>1</vcpu>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   <metadata>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <nova:name>tempest-TestNetworkBasicOps-server-1978315975</nova:name>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <nova:creationTime>2025-11-24 09:53:32</nova:creationTime>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <nova:flavor name="m1.nano">
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <nova:memory>128</nova:memory>
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <nova:disk>1</nova:disk>
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <nova:swap>0</nova:swap>
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <nova:vcpus>1</nova:vcpus>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       </nova:flavor>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <nova:owner>
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       </nova:owner>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <nova:ports>
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <nova:port uuid="a483f88b-7075-47e4-a535-d23a6d20a8b0">
Nov 24 09:53:33 compute-0 nova_compute[257700]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:         </nova:port>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       </nova:ports>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     </nova:instance>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   </metadata>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   <sysinfo type="smbios">
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <system>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <entry name="manufacturer">RDO</entry>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <entry name="product">OpenStack Compute</entry>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <entry name="serial">a30689a0-a2d7-4b8d-9f45-9763cda52bf9</entry>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <entry name="uuid">a30689a0-a2d7-4b8d-9f45-9763cda52bf9</entry>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <entry name="family">Virtual Machine</entry>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     </system>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   </sysinfo>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   <os>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <boot dev="hd"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <smbios mode="sysinfo"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   </os>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   <features>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <acpi/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <apic/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <vmcoreinfo/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   </features>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   <clock offset="utc">
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <timer name="hpet" present="no"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   </clock>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   <cpu mode="host-model" match="exact">
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   </cpu>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   <devices>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <disk type="network" device="disk">
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk">
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       </source>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       </auth>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <target dev="vda" bus="virtio"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     </disk>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <disk type="network" device="cdrom">
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk.config">
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       </source>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 09:53:33 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       </auth>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <target dev="sda" bus="sata"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     </disk>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <interface type="ethernet">
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <mac address="fa:16:3e:60:ce:66"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <mtu size="1442"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <target dev="tapa483f88b-70"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     </interface>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <serial type="pty">
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <log file="/var/lib/nova/instances/a30689a0-a2d7-4b8d-9f45-9763cda52bf9/console.log" append="off"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     </serial>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <video>
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     </video>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <input type="tablet" bus="usb"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <rng model="virtio">
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <backend model="random">/dev/urandom</backend>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     </rng>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <controller type="usb" index="0"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     <memballoon model="virtio">
Nov 24 09:53:33 compute-0 nova_compute[257700]:       <stats period="10"/>
Nov 24 09:53:33 compute-0 nova_compute[257700]:     </memballoon>
Nov 24 09:53:33 compute-0 nova_compute[257700]:   </devices>
Nov 24 09:53:33 compute-0 nova_compute[257700]: </domain>
Nov 24 09:53:33 compute-0 nova_compute[257700]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.014 257704 DEBUG nova.compute.manager [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Preparing to wait for external event network-vif-plugged-a483f88b-7075-47e4-a535-d23a6d20a8b0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.015 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.015 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.015 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.016 257704 DEBUG nova.virt.libvirt.vif [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:53:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1978315975',display_name='tempest-TestNetworkBasicOps-server-1978315975',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1978315975',id=1,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJNthCOG5s/vj2boz4+BJ2PgQP/GfmCVp6AlZEWP14On33KLzHHGoFmk6PLUtyAqj03T1Qn3XgryOX94XA7OB9At/bgHp1KmuCanoF6+mPqReV5daqHshzy/eMS+IKuQNA==',key_name='tempest-TestNetworkBasicOps-1678285583',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-cxh7ww8x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:53:27Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=a30689a0-a2d7-4b8d-9f45-9763cda52bf9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.017 257704 DEBUG nova.network.os_vif_util [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.017 257704 DEBUG nova.network.os_vif_util [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:ce:66,bridge_name='br-int',has_traffic_filtering=True,id=a483f88b-7075-47e4-a535-d23a6d20a8b0,network=Network(0dc1b2d1-8ad8-483c-a726-aec9ed2927a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa483f88b-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.018 257704 DEBUG os_vif [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:ce:66,bridge_name='br-int',has_traffic_filtering=True,id=a483f88b-7075-47e4-a535-d23a6d20a8b0,network=Network(0dc1b2d1-8ad8-483c-a726-aec9ed2927a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa483f88b-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.052 257704 DEBUG ovsdbapp.backend.ovs_idl [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.053 257704 DEBUG ovsdbapp.backend.ovs_idl [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.053 257704 DEBUG ovsdbapp.backend.ovs_idl [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.054 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.055 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.055 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.055 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.057 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.058 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.068 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.069 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.069 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.070 257704 INFO oslo.privsep.daemon [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpitknq3y1/privsep.sock']
Nov 24 09:53:33 compute-0 ceph-mon[74331]: pgmap v797: 353 pgs: 353 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 52 op/s
Nov 24 09:53:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3757053087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:53:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1655492014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:53:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 24 09:53:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 24 09:53:33 compute-0 ceph-mon[74331]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.695 257704 INFO oslo.privsep.daemon [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Spawned new privsep daemon via rootwrap
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.588 264756 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.592 264756 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.595 264756 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.595 264756 INFO oslo.privsep.daemon [-] privsep daemon running as pid 264756
Nov 24 09:53:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:33.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.767 257704 DEBUG nova.network.neutron [req-15eaf96f-f07c-499d-8bf1-7ebdf3160929 req-ab99e9b1-5d6c-4ef7-b136-515eca4e4694 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Updated VIF entry in instance network info cache for port a483f88b-7075-47e4-a535-d23a6d20a8b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.767 257704 DEBUG nova.network.neutron [req-15eaf96f-f07c-499d-8bf1-7ebdf3160929 req-ab99e9b1-5d6c-4ef7-b136-515eca4e4694 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Updating instance_info_cache with network_info: [{"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.782 257704 DEBUG oslo_concurrency.lockutils [req-15eaf96f-f07c-499d-8bf1-7ebdf3160929 req-ab99e9b1-5d6c-4ef7-b136-515eca4e4694 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:53:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v799: 353 pgs: 353 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 68 op/s
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.998 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:33 compute-0 nova_compute[257700]: 2025-11-24 09:53:33.999 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa483f88b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:53:34 compute-0 nova_compute[257700]: 2025-11-24 09:53:34.000 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa483f88b-70, col_values=(('external_ids', {'iface-id': 'a483f88b-7075-47e4-a535-d23a6d20a8b0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:ce:66', 'vm-uuid': 'a30689a0-a2d7-4b8d-9f45-9763cda52bf9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
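The os-vif plug is two OVSDB operations: add the tap port to br-int (idempotent via may_exist) and stamp the Interface row's external_ids with the Neutron port UUID, status, MAC, and instance UUID; ovn-controller later matches that iface-id when it claims the logical port (the "Claiming lport" lines below). The same effect, sketched with the ovs-vsctl CLI instead of ovsdbapp:

    import subprocess

    def plug_vif(bridge, port, iface_id, mac, vm_uuid):
        # Mirrors AddPortCommand(may_exist=True) + DbSetCommand(external_ids=...)
        subprocess.run(
            ['ovs-vsctl', '--may-exist', 'add-port', bridge, port,
             '--', 'set', 'Interface', port,
             'external_ids:iface-id=%s' % iface_id,
             'external_ids:iface-status=active',
             'external_ids:attached-mac=%s' % mac,
             'external_ids:vm-uuid=%s' % vm_uuid],
            check=True)

    plug_vif('br-int', 'tapa483f88b-70',
             'a483f88b-7075-47e4-a535-d23a6d20a8b0',
             'fa:16:3e:60:ce:66',
             'a30689a0-a2d7-4b8d-9f45-9763cda52bf9')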
Nov 24 09:53:34 compute-0 nova_compute[257700]: 2025-11-24 09:53:34.002 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:34 compute-0 NetworkManager[48883]: <info>  [1763978014.0032] manager: (tapa483f88b-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 24 09:53:34 compute-0 nova_compute[257700]: 2025-11-24 09:53:34.006 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:53:34 compute-0 nova_compute[257700]: 2025-11-24 09:53:34.009 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:34 compute-0 nova_compute[257700]: 2025-11-24 09:53:34.010 257704 INFO os_vif [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:ce:66,bridge_name='br-int',has_traffic_filtering=True,id=a483f88b-7075-47e4-a535-d23a6d20a8b0,network=Network(0dc1b2d1-8ad8-483c-a726-aec9ed2927a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa483f88b-70')
Nov 24 09:53:34 compute-0 nova_compute[257700]: 2025-11-24 09:53:34.050 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:53:34 compute-0 nova_compute[257700]: 2025-11-24 09:53:34.051 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:53:34 compute-0 nova_compute[257700]: 2025-11-24 09:53:34.051 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:60:ce:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 09:53:34 compute-0 nova_compute[257700]: 2025-11-24 09:53:34.051 257704 INFO nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Using config drive
Nov 24 09:53:34 compute-0 nova_compute[257700]: 2025-11-24 09:53:34.076 257704 DEBUG nova.storage.rbd_utils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:53:34 compute-0 ceph-mon[74331]: osdmap e158: 3 total, 3 up, 3 in
Nov 24 09:53:34 compute-0 ceph-mon[74331]: pgmap v799: 353 pgs: 353 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 68 op/s
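The pgmap/osdmap lines are the periodic Ceph heartbeat: all 3 OSDs up and in, and all 353 placement groups active+clean, i.e. fully replicated and serving I/O. A quick health predicate over these summary lines (a sketch; it only handles the all-clean shape shown here, not pgmap lines that list several PG states):

    import re

    PGMAP_RE = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<total>\d+) pgs: '
        r'(?P<clean>\d+) active\+clean; (?P<data>[\d.]+ \w+) data')

    def all_clean(line):
        """True when every PG in a pgmap summary line is active+clean."""
        m = PGMAP_RE.search(line)
        return bool(m) and m.group('total') == m.group('clean')

    assert all_clean('pgmap v799: 353 pgs: 353 active+clean; 88 MiB data, '
                     '203 MiB used, 60 GiB / 60 GiB avail')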
Nov 24 09:53:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:34.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:53:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:53:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:53:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:53:35 compute-0 nova_compute[257700]: 2025-11-24 09:53:35.621 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:35.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v800: 353 pgs: 353 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 68 op/s
Nov 24 09:53:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:36.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:36 compute-0 ceph-mon[74331]: pgmap v800: 353 pgs: 353 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 68 op/s
Nov 24 09:53:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:37.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
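Alertmanager gives up notifying the ceph-dashboard webhook receivers on compute-1 and compute-2 after two attempts each; "context deadline exceeded" is a client-side timeout, so port 8443 on those hosts is unreachable or slow rather than returning an error. A minimal reachability probe for the receiver URL taken from the log (whether the endpoint accepts an empty alert list is an assumption; the point is only to distinguish a timeout from a refusal or an HTTP error):

    import urllib.request

    URL = 'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver'

    def probe(url, timeout=5):
        """POST an empty JSON list; return the HTTP status or the OSError."""
        req = urllib.request.Request(
            url, data=b'[]', headers={'Content-Type': 'application/json'})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status
        except OSError as exc:  # URLError, timeout, connection refused
            return exc

    print(probe(URL))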
Nov 24 09:53:37 compute-0 nova_compute[257700]: 2025-11-24 09:53:37.261 257704 INFO nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Creating config drive at /var/lib/nova/instances/a30689a0-a2d7-4b8d-9f45-9763cda52bf9/disk.config
Nov 24 09:53:37 compute-0 nova_compute[257700]: 2025-11-24 09:53:37.266 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a30689a0-a2d7-4b8d-9f45-9763cda52bf9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5iw_5sz4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:37 compute-0 nova_compute[257700]: 2025-11-24 09:53:37.388 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a30689a0-a2d7-4b8d-9f45-9763cda52bf9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5iw_5sz4" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:37 compute-0 nova_compute[257700]: 2025-11-24 09:53:37.415 257704 DEBUG nova.storage.rbd_utils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:53:37 compute-0 nova_compute[257700]: 2025-11-24 09:53:37.419 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a30689a0-a2d7-4b8d-9f45-9763cda52bf9/disk.config a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:53:37 compute-0 nova_compute[257700]: 2025-11-24 09:53:37.566 257704 DEBUG oslo_concurrency.processutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a30689a0-a2d7-4b8d-9f45-9763cda52bf9/disk.config a30689a0-a2d7-4b8d-9f45-9763cda52bf9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:53:37 compute-0 nova_compute[257700]: 2025-11-24 09:53:37.567 257704 INFO nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Deleting local config drive /var/lib/nova/instances/a30689a0-a2d7-4b8d-9f45-9763cda52bf9/disk.config because it was imported into RBD.
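The config-drive sequence above is: build an ISO9660 image labeled config-2 from a temp directory with mkisofs, rbd-import it into the vms pool as <uuid>_disk.config, then delete the local file so RBD holds the only copy. A condensed replay of those two commands, with flags copied from the log (the -publisher string is omitted here, and real nova also populates the metadata directory first):

    import os
    import subprocess

    def build_config_drive(uuid, metadata_dir, pool='vms',
                           rbd_user='openstack', conf='/etc/ceph/ceph.conf'):
        iso = '/var/lib/nova/instances/%s/disk.config' % uuid
        subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                        '-allow-lowercase', '-allow-multidot', '-l',
                        '-quiet', '-J', '-r', '-V', 'config-2',
                        metadata_dir], check=True)
        subprocess.run(['rbd', 'import', '--pool', pool, iso,
                        '%s_disk.config' % uuid, '--image-format=2',
                        '--id', rbd_user, '--conf', conf], check=True)
        os.unlink(iso)  # keep only the RBD copy, as the INFO line above notes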
Nov 24 09:53:37 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 24 09:53:37 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 24 09:53:37 compute-0 podman[264824]: 2025-11-24 09:53:37.650204558 +0000 UTC m=+0.049993898 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 24 09:53:37 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 24 09:53:37 compute-0 kernel: tapa483f88b-70: entered promiscuous mode
Nov 24 09:53:37 compute-0 NetworkManager[48883]: <info>  [1763978017.6632] manager: (tapa483f88b-70): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Nov 24 09:53:37 compute-0 ovn_controller[155123]: 2025-11-24T09:53:37Z|00027|binding|INFO|Claiming lport a483f88b-7075-47e4-a535-d23a6d20a8b0 for this chassis.
Nov 24 09:53:37 compute-0 ovn_controller[155123]: 2025-11-24T09:53:37Z|00028|binding|INFO|a483f88b-7075-47e4-a535-d23a6d20a8b0: Claiming fa:16:3e:60:ce:66 10.100.0.9
Nov 24 09:53:37 compute-0 nova_compute[257700]: 2025-11-24 09:53:37.704 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:37.717 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:ce:66 10.100.0.9'], port_security=['fa:16:3e:60:ce:66 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a30689a0-a2d7-4b8d-9f45-9763cda52bf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94df962a-9564-4d32-ae9d-240621404de3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d43e3d9c-2d20-485b-a9cd-f3ec621a22dc, chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=a483f88b-7075-47e4-a535-d23a6d20a8b0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:53:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:37.719 165073 INFO neutron.agent.ovn.metadata.agent [-] Port a483f88b-7075-47e4-a535-d23a6d20a8b0 in datapath 0dc1b2d1-8ad8-483c-a726-aec9ed2927a1 bound to our chassis
Nov 24 09:53:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:37.721 165073 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0dc1b2d1-8ad8-483c-a726-aec9ed2927a1
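Once the port is bound to this chassis, the metadata agent provisions a per-network namespace (ovnmeta-<network-uuid>) with a veth pair, one end left in the root namespace for br-int and the peer moved inside, where a haproxy instance answers 169.254.169.254 metadata requests. The agent does this through pyroute2 behind privsep; the sketch below approximates the wiring with plain iproute2 commands, and the -80/-81 suffix convention and the /32 metadata address are inferred from the surrounding lines rather than taken from neutron source:

    import subprocess

    def provision_metadata_ns(net_uuid):
        ns = 'ovnmeta-%s' % net_uuid
        outer = 'tap%s-80' % net_uuid[:8]   # stays in the root namespace
        inner = 'tap%s-81' % net_uuid[:8]   # moved into the namespace
        for cmd in (
            ['ip', 'netns', 'add', ns],
            ['ip', 'link', 'add', outer, 'type', 'veth', 'peer', 'name', inner],
            ['ip', 'link', 'set', inner, 'netns', ns],
            ['ip', 'link', 'set', outer, 'up'],
            ['ip', '-n', ns, 'link', 'set', inner, 'up'],
            ['ip', '-n', ns, 'addr', 'add', '169.254.169.254/32', 'dev', inner],
        ):
            subprocess.run(cmd, check=True)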
Nov 24 09:53:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:37.722 165073 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpc2jqityd/privsep.sock']
Nov 24 09:53:37 compute-0 systemd-udevd[264874]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:53:37 compute-0 NetworkManager[48883]: <info>  [1763978017.7429] device (tapa483f88b-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:53:37 compute-0 NetworkManager[48883]: <info>  [1763978017.7442] device (tapa483f88b-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 09:53:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:53:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:37.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:53:37 compute-0 systemd-machined[219130]: New machine qemu-1-instance-00000001.
Nov 24 09:53:37 compute-0 nova_compute[257700]: 2025-11-24 09:53:37.780 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:37 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
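libvirt hands the new qemu process to systemd-machined, which registers it as machine qemu-1-instance-00000001 in its own scope unit. Note the order: the domain is created paused, the VIF plug and config-drive work complete, and only then is it resumed (the Paused/Resumed lifecycle events below). Checking the domain state from Python with the libvirt bindings, assuming python3-libvirt is installed and the qemu:///system socket is accessible:

    import libvirt  # python3-libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    state, reason = dom.state()
    # libvirt.VIR_DOMAIN_RUNNING == 1, libvirt.VIR_DOMAIN_PAUSED == 3
    print(dom.name(), state == libvirt.VIR_DOMAIN_RUNNING)
    conn.close()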
Nov 24 09:53:37 compute-0 ovn_controller[155123]: 2025-11-24T09:53:37Z|00029|binding|INFO|Setting lport a483f88b-7075-47e4-a535-d23a6d20a8b0 ovn-installed in OVS
Nov 24 09:53:37 compute-0 ovn_controller[155123]: 2025-11-24T09:53:37Z|00030|binding|INFO|Setting lport a483f88b-7075-47e4-a535-d23a6d20a8b0 up in Southbound
Nov 24 09:53:37 compute-0 nova_compute[257700]: 2025-11-24 09:53:37.787 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v801: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 53 op/s
Nov 24 09:53:38 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:38.365 165073 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 09:53:38 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:38.366 165073 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpc2jqityd/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 09:53:38 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:38.234 264910 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 09:53:38 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:38.238 264910 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 09:53:38 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:38.241 264910 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 24 09:53:38 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:38.241 264910 INFO oslo.privsep.daemon [-] privsep daemon running as pid 264910
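The burst above is oslo.privsep starting its root helper: the agent shells out through sudo and neutron-rootwrap, the helper connects back over a unix socket, keeps only the listed capability set (CAP_NET_ADMIN, CAP_SYS_ADMIN, and friends), and then executes decorated functions on the agent's behalf. A sketch of how such an entrypoint is declared (simplified; real neutron defines its contexts in neutron.privileged and needs matching rootwrap configuration):

    from oslo_privsep import capabilities, priv_context

    # Root uid/gid, but only the capabilities the caller actually needs.
    default = priv_context.PrivContext(
        'demo',
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN],
    )

    @default.entrypoint
    def set_link_up(ifname):
        # Body runs inside the privsep daemon (pid 264910 above),
        # not in the calling agent process.
        import subprocess
        subprocess.run(['ip', 'link', 'set', ifname, 'up'], check=True)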
Nov 24 09:53:38 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:38.369 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[502c0746-bd90-497b-91e0-f35100045350]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.373 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978018.3720186, a30689a0-a2d7-4b8d-9f45-9763cda52bf9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.374 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] VM Started (Lifecycle Event)
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.442 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.447 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978018.372353, a30689a0-a2d7-4b8d-9f45-9763cda52bf9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.448 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] VM Paused (Lifecycle Event)
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.471 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.474 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.502 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] During sync_power_state the instance has a pending task (spawning). Skip.
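The power-state sync lines are easier to read with nova's numeric codes expanded: the database still has 0 (NOSTATE, the row predates the spawn) while the hypervisor reports 3 (PAUSED); since task_state is spawning, the sync is skipped rather than "correcting" a VM that is mid-build. The values come from nova.compute.power_state:

    # Values from nova.compute.power_state
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    db_state, vm_state = 0, 3  # from the sync_power_state line above
    print('DB has %s, hypervisor reports %s'
          % (POWER_STATE[db_state], POWER_STATE[vm_state]))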
Nov 24 09:53:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:38.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:38.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.895 257704 DEBUG nova.compute.manager [req-8406447d-581b-4053-a376-79d619fd56e8 req-e52563d0-c2a8-4d81-9b4f-7486d62696cb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Received event network-vif-plugged-a483f88b-7075-47e4-a535-d23a6d20a8b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.896 257704 DEBUG oslo_concurrency.lockutils [req-8406447d-581b-4053-a376-79d619fd56e8 req-e52563d0-c2a8-4d81-9b4f-7486d62696cb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.896 257704 DEBUG oslo_concurrency.lockutils [req-8406447d-581b-4053-a376-79d619fd56e8 req-e52563d0-c2a8-4d81-9b4f-7486d62696cb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.896 257704 DEBUG oslo_concurrency.lockutils [req-8406447d-581b-4053-a376-79d619fd56e8 req-e52563d0-c2a8-4d81-9b4f-7486d62696cb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.896 257704 DEBUG nova.compute.manager [req-8406447d-581b-4053-a376-79d619fd56e8 req-e52563d0-c2a8-4d81-9b4f-7486d62696cb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Processing event network-vif-plugged-a483f88b-7075-47e4-a535-d23a6d20a8b0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.897 257704 DEBUG nova.compute.manager [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.899 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978018.899497, a30689a0-a2d7-4b8d-9f45-9763cda52bf9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.899 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] VM Resumed (Lifecycle Event)
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.901 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.916 257704 INFO nova.virt.libvirt.driver [-] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Instance spawned successfully.
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.916 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.921 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.924 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.934 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.935 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.935 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.936 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.936 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.937 257704 DEBUG nova.virt.libvirt.driver [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.940 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:53:38 compute-0 ceph-mon[74331]: pgmap v801: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 53 op/s
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.987 257704 INFO nova.compute.manager [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Took 11.64 seconds to spawn the instance on the hypervisor.
Nov 24 09:53:38 compute-0 nova_compute[257700]: 2025-11-24 09:53:38.988 257704 DEBUG nova.compute.manager [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:53:39 compute-0 nova_compute[257700]: 2025-11-24 09:53:39.002 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:39 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:39.044 264910 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:39 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:39.044 264910 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:39 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:39.044 264910 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:53:39 compute-0 nova_compute[257700]: 2025-11-24 09:53:39.056 257704 INFO nova.compute.manager [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Took 12.56 seconds to build instance.
Nov 24 09:53:39 compute-0 nova_compute[257700]: 2025-11-24 09:53:39.071 257704 DEBUG oslo_concurrency.lockutils [None req-ae9af906-861b-4cc8-b299-44e59e2b2771 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
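Three nested durations close out the boot: 11.64 s inside the libvirt spawn, 12.56 s for the whole build, and 12.713 s holding the per-instance build lock, so everything outside the spawn (scheduling callbacks, the network info refresh, event waits) cost under a second. Trivial arithmetic, but worth making explicit:

    spawn, build, lock_held = 11.64, 12.56, 12.713  # seconds, from the log
    assert spawn < build < lock_held
    print('non-spawn build overhead: %.2fs' % (build - spawn))        # ~0.92s
    print('lock overhead beyond build: %.3fs' % (lock_held - build))  # ~0.153s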
Nov 24 09:53:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:39.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:39 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:39.885 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c17be1-5796-4e10-968a-fabe7d273b05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:39 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:39.886 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0dc1b2d1-81 in ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 09:53:39 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:39.888 264910 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0dc1b2d1-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 09:53:39 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:39.888 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[f27ee38f-a1fd-4ea2-9079-298f3b9102ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:39 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:39.892 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[2830deaa-1e3e-41e4-b5fc-530feca08ad9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v802: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 47 op/s
Nov 24 09:53:39 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:39.915 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[19a173ca-6275-4aba-888b-c42111edd99c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:39 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:39.945 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e755a2-da9a-465f-94fb-15ce6b2bcca9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:39 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:39.947 165073 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp6nn3nxbo/privsep.sock']
Nov 24 09:53:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:53:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:53:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:53:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:53:40 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:40.600 165073 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 09:53:40 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:40.601 165073 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6nn3nxbo/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 09:53:40 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:40.482 264951 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 09:53:40 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:40.486 264951 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 09:53:40 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:40.488 264951 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 24 09:53:40 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:40.488 264951 INFO oslo.privsep.daemon [-] privsep daemon running as pid 264951
Nov 24 09:53:40 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:40.604 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[77181856-f360-40c0-b812-b5af44b8a7cf]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:40 compute-0 nova_compute[257700]: 2025-11-24 09:53:40.623 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:40.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:53:40] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 24 09:53:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:53:40] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Nov 24 09:53:40 compute-0 ceph-mon[74331]: pgmap v802: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 47 op/s
Nov 24 09:53:41 compute-0 nova_compute[257700]: 2025-11-24 09:53:41.018 257704 DEBUG nova.compute.manager [req-9329edcd-0aa4-45cc-bfdb-f177af25ec06 req-2e085532-e378-43db-b092-408b20b5b69b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Received event network-vif-plugged-a483f88b-7075-47e4-a535-d23a6d20a8b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:53:41 compute-0 nova_compute[257700]: 2025-11-24 09:53:41.018 257704 DEBUG oslo_concurrency.lockutils [req-9329edcd-0aa4-45cc-bfdb-f177af25ec06 req-2e085532-e378-43db-b092-408b20b5b69b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:41 compute-0 nova_compute[257700]: 2025-11-24 09:53:41.018 257704 DEBUG oslo_concurrency.lockutils [req-9329edcd-0aa4-45cc-bfdb-f177af25ec06 req-2e085532-e378-43db-b092-408b20b5b69b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:41 compute-0 nova_compute[257700]: 2025-11-24 09:53:41.018 257704 DEBUG oslo_concurrency.lockutils [req-9329edcd-0aa4-45cc-bfdb-f177af25ec06 req-2e085532-e378-43db-b092-408b20b5b69b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:53:41 compute-0 nova_compute[257700]: 2025-11-24 09:53:41.019 257704 DEBUG nova.compute.manager [req-9329edcd-0aa4-45cc-bfdb-f177af25ec06 req-2e085532-e378-43db-b092-408b20b5b69b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] No waiting events found dispatching network-vif-plugged-a483f88b-7075-47e4-a535-d23a6d20a8b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:53:41 compute-0 nova_compute[257700]: 2025-11-24 09:53:41.019 257704 WARNING nova.compute.manager [req-9329edcd-0aa4-45cc-bfdb-f177af25ec06 req-2e085532-e378-43db-b092-408b20b5b69b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Received unexpected event network-vif-plugged-a483f88b-7075-47e4-a535-d23a6d20a8b0 for instance with vm_state active and task_state None.
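This WARNING is benign: nova registers a waiter for network-vif-plugged before plugging the VIF, the first event (09:53:38.895 above) completed that wait, and this second delivery at 09:53:41 finds no waiter because the instance is already active with no pending task. A minimal sketch of the prepare-then-pop pattern behind those lock-wrapped pop_instance_event lines:

    import threading

    class InstanceEvents:
        """Sketch of nova's prepare-then-pop external event handling."""
        def __init__(self):
            self._waiters = {}

        def prepare(self, name):
            self._waiters[name] = threading.Event()

        def pop(self, name):
            ev = self._waiters.pop(name, None)
            if ev is None:
                print('Received unexpected event %s' % name)  # the WARNING above
            else:
                ev.set()  # wakes wait_for_instance_event

    events = InstanceEvents()
    events.prepare('network-vif-plugged-a483f88b')
    events.pop('network-vif-plugged-a483f88b')  # first delivery: completes wait
    events.pop('network-vif-plugged-a483f88b')  # second delivery: unexpected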
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.094 264951 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.095 264951 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.095 264951 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.657 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b5234d-831a-478d-85c4-ee583df3cde1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:41 compute-0 NetworkManager[48883]: <info>  [1763978021.6792] manager: (tap0dc1b2d1-80): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.678 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[c882509d-f19e-4a6b-b00e-07e00151cd03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:41 compute-0 systemd-udevd[264964]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.709 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[45c919b4-afd0-497d-9a7a-88bca319e6b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.712 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc494a0-5842-4d6f-a891-e5eafe5d34f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:41 compute-0 NetworkManager[48883]: <info>  [1763978021.7368] device (tap0dc1b2d1-80): carrier: link connected
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.741 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe20fd7-563e-4672-8d2b-ae2f17159e15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:41.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.759 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb3d0dc-db79-4b4e-b738-91832da1d44d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0dc1b2d1-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:eb:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396930, 'reachable_time': 28212, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264982, 'error': None, 'target': 'ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.774 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[87809476-f235-4303-974a-092303663f4e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:ebea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396930, 'tstamp': 396930}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264983, 'error': None, 'target': 'ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.787 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[3f676efd-2334-4eb2-9ef7-eeaa891ca244]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0dc1b2d1-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:eb:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396930, 'reachable_time': 28212, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264984, 'error': None, 'target': 'ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.812 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[780f3469-d7c1-4bcd-97d8-3016b5a1f56d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.856 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[41d67f2d-eb53-4144-9080-422c2daacd3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.858 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0dc1b2d1-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.858 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.858 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0dc1b2d1-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:53:41 compute-0 nova_compute[257700]: 2025-11-24 09:53:41.860 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:41 compute-0 NetworkManager[48883]: <info>  [1763978021.8609] manager: (tap0dc1b2d1-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 24 09:53:41 compute-0 kernel: tap0dc1b2d1-80: entered promiscuous mode
Nov 24 09:53:41 compute-0 nova_compute[257700]: 2025-11-24 09:53:41.863 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.864 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0dc1b2d1-80, col_values=(('external_ids', {'iface-id': '6d80ed6e-a23b-438d-8881-7f4774f1703c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
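The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) remove the tap from br-ex, add it to br-int, and stamp the Interface with the Neutron port UUID so ovn-controller can bind it. A rough equivalent using ovsdbapp directly, following its documented connection pattern; the database socket path is an assumption for this host, and the agent issues these as three separate single-command transactions rather than one batch:

    # Sketch of the equivalent ovsdbapp calls, batched into one transaction.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    PORT = 'tap0dc1b2d1-80'
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port(PORT, bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', PORT, may_exist=True))
        txn.add(api.db_set('Interface', PORT,
                           ('external_ids',
                            {'iface-id': '6d80ed6e-a23b-438d-8881-7f4774f1703c'})))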
Nov 24 09:53:41 compute-0 nova_compute[257700]: 2025-11-24 09:53:41.865 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:41 compute-0 ovn_controller[155123]: 2025-11-24T09:53:41Z|00031|binding|INFO|Releasing lport 6d80ed6e-a23b-438d-8881-7f4774f1703c from this chassis (sb_readonly=0)
Nov 24 09:53:41 compute-0 nova_compute[257700]: 2025-11-24 09:53:41.878 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:41 compute-0 nova_compute[257700]: 2025-11-24 09:53:41.880 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.883 165073 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0dc1b2d1-8ad8-483c-a726-aec9ed2927a1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0dc1b2d1-8ad8-483c-a726-aec9ed2927a1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.883 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3467d6-f0bb-493f-8b95-a2c56dec06fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.884 165073 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: global
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     log         /dev/log local0 debug
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     log-tag     haproxy-metadata-proxy-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     user        root
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     group       root
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     maxconn     1024
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     pidfile     /var/lib/neutron/external/pids/0dc1b2d1-8ad8-483c-a726-aec9ed2927a1.pid.haproxy
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     daemon
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: defaults
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     log global
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     mode http
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     option httplog
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     option dontlognull
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     option http-server-close
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     option forwardfor
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     retries                 3
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     timeout http-request    30s
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     timeout connect         30s
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     timeout client          32s
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     timeout server          32s
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     timeout http-keep-alive 30s
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: listen listener
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     bind 169.254.169.254:80
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:     http-request add-header X-OVN-Network-ID 0dc1b2d1-8ad8-483c-a726-aec9ed2927a1
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 09:53:41 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:53:41.885 165073 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1', 'env', 'PROCESS_TAG=haproxy-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0dc1b2d1-8ad8-483c-a726-aec9ed2927a1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 09:53:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v803: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 89 op/s
Nov 24 09:53:42 compute-0 podman[265017]: 2025-11-24 09:53:42.220588651 +0000 UTC m=+0.052204763 container create f2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 24 09:53:42 compute-0 systemd[1]: Started libpod-conmon-f2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d.scope.
Nov 24 09:53:42 compute-0 podman[265017]: 2025-11-24 09:53:42.190994618 +0000 UTC m=+0.022610760 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:53:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23b761c9bcb67f4eeac16b882980d2d463f8e0133d4e6d0a00c61cd9c851009/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:42 compute-0 podman[265017]: 2025-11-24 09:53:42.316818763 +0000 UTC m=+0.148434895 container init f2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 24 09:53:42 compute-0 podman[265017]: 2025-11-24 09:53:42.321827997 +0000 UTC m=+0.153444109 container start f2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 24 09:53:42 compute-0 neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1[265032]: [NOTICE]   (265037) : New worker (265039) forked
Nov 24 09:53:42 compute-0 neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1[265032]: [NOTICE]   (265037) : Loading success.
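With haproxy now loaded, the rendered config above binds 169.254.169.254:80 inside the ovnmeta namespace, forwards requests to the agent's socket at /var/lib/neutron/metadata_proxy, and adds the X-OVN-Network-ID header (the network UUID) so the metadata agent can resolve the requesting instance. A small illustrative check that the proxy answers from inside the namespace; the use of ip-netns plus curl here is a diagnostic sketch, not something the agent runs:

    # Hedged sketch: probe the metadata proxy inside the ovnmeta namespace
    # and print the HTTP status code it returns.
    import subprocess

    NS = 'ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1'
    out = subprocess.run(
        ['ip', 'netns', 'exec', NS,
         'curl', '-s', '-o', '/dev/null', '-w', '%{http_code}',
         'http://169.254.169.254/'],
        capture_output=True, text=True, check=True)
    print('metadata proxy HTTP status:', out.stdout)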
Nov 24 09:53:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:42.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:43 compute-0 ceph-mon[74331]: pgmap v803: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 89 op/s
Nov 24 09:53:43 compute-0 NetworkManager[48883]: <info>  [1763978023.5065] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Nov 24 09:53:43 compute-0 nova_compute[257700]: 2025-11-24 09:53:43.505 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:43 compute-0 NetworkManager[48883]: <info>  [1763978023.5073] device (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:53:43 compute-0 NetworkManager[48883]: <info>  [1763978023.5086] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Nov 24 09:53:43 compute-0 ovn_controller[155123]: 2025-11-24T09:53:43Z|00032|binding|INFO|Releasing lport 6d80ed6e-a23b-438d-8881-7f4774f1703c from this chassis (sb_readonly=0)
Nov 24 09:53:43 compute-0 NetworkManager[48883]: <info>  [1763978023.5091] device (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 09:53:43 compute-0 NetworkManager[48883]: <info>  [1763978023.5102] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 24 09:53:43 compute-0 NetworkManager[48883]: <info>  [1763978023.5109] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 24 09:53:43 compute-0 NetworkManager[48883]: <info>  [1763978023.5114] device (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 09:53:43 compute-0 NetworkManager[48883]: <info>  [1763978023.5118] device (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 09:53:43 compute-0 ovn_controller[155123]: 2025-11-24T09:53:43Z|00033|binding|INFO|Releasing lport 6d80ed6e-a23b-438d-8881-7f4774f1703c from this chassis (sb_readonly=0)
Nov 24 09:53:43 compute-0 nova_compute[257700]: 2025-11-24 09:53:43.523 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:43 compute-0 nova_compute[257700]: 2025-11-24 09:53:43.528 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:43.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v804: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 86 op/s
Nov 24 09:53:44 compute-0 nova_compute[257700]: 2025-11-24 09:53:44.004 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:44 compute-0 nova_compute[257700]: 2025-11-24 09:53:44.626 257704 DEBUG nova.compute.manager [req-f78317b6-463d-4aa7-a626-88be5a51d6b8 req-2b307b88-ce2a-4731-bf8e-d999c948999e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Received event network-changed-a483f88b-7075-47e4-a535-d23a6d20a8b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:53:44 compute-0 nova_compute[257700]: 2025-11-24 09:53:44.626 257704 DEBUG nova.compute.manager [req-f78317b6-463d-4aa7-a626-88be5a51d6b8 req-2b307b88-ce2a-4731-bf8e-d999c948999e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Refreshing instance network info cache due to event network-changed-a483f88b-7075-47e4-a535-d23a6d20a8b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:53:44 compute-0 nova_compute[257700]: 2025-11-24 09:53:44.626 257704 DEBUG oslo_concurrency.lockutils [req-f78317b6-463d-4aa7-a626-88be5a51d6b8 req-2b307b88-ce2a-4731-bf8e-d999c948999e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:53:44 compute-0 nova_compute[257700]: 2025-11-24 09:53:44.626 257704 DEBUG oslo_concurrency.lockutils [req-f78317b6-463d-4aa7-a626-88be5a51d6b8 req-2b307b88-ce2a-4731-bf8e-d999c948999e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:53:44 compute-0 nova_compute[257700]: 2025-11-24 09:53:44.627 257704 DEBUG nova.network.neutron [req-f78317b6-463d-4aa7-a626-88be5a51d6b8 req-2b307b88-ce2a-4731-bf8e-d999c948999e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Refreshing network info cache for port a483f88b-7075-47e4-a535-d23a6d20a8b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:53:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:44.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:53:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:53:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:53:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:53:45 compute-0 ceph-mon[74331]: pgmap v804: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 86 op/s
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:53:45
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.rgw.root', 'default.rgw.control', '.nfs', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.meta']
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:53:45 compute-0 nova_compute[257700]: 2025-11-24 09:53:45.625 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
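The pg_autoscaler figures above follow a simple formula: pg target = usage ratio x bias x (target PGs per OSD x OSD count), which is then quantized to a power of two subject to pool minimums, which is why tiny pools still sit at 32. The logged values reproduce exactly if the target is the usual 100 PGs per OSD and this 60 GiB cluster has 3 OSDs; both figures are assumptions inferred from the numbers, not stated in the log:

    # Worked check of the pg_autoscaler arithmetic seen above.
    # TARGET_PG_PER_OSD=100 and NUM_OSDS=3 are assumptions.
    TARGET_PG_PER_OSD = 100
    NUM_OSDS = 3

    def pg_target(usage_ratio, bias=1.0):
        return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    print(pg_target(0.00034841348814872695))         # ~0.10452 -> 'vms'
    print(pg_target(0.000665858301588852))           # ~0.19976 -> 'images'
    print(pg_target(5.087256625643029e-07, bias=4))  # ~0.00061 -> 'cephfs.cephfs.meta'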
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:53:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:45.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:45 compute-0 sudo[265052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:53:45 compute-0 sudo[265052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:45 compute-0 sudo[265052]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v805: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 09:53:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:53:46 compute-0 nova_compute[257700]: 2025-11-24 09:53:46.683 257704 DEBUG nova.network.neutron [req-f78317b6-463d-4aa7-a626-88be5a51d6b8 req-2b307b88-ce2a-4731-bf8e-d999c948999e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Updated VIF entry in instance network info cache for port a483f88b-7075-47e4-a535-d23a6d20a8b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:53:46 compute-0 nova_compute[257700]: 2025-11-24 09:53:46.684 257704 DEBUG nova.network.neutron [req-f78317b6-463d-4aa7-a626-88be5a51d6b8 req-2b307b88-ce2a-4731-bf8e-d999c948999e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Updating instance_info_cache with network_info: [{"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:53:46 compute-0 nova_compute[257700]: 2025-11-24 09:53:46.704 257704 DEBUG oslo_concurrency.lockutils [req-f78317b6-463d-4aa7-a626-88be5a51d6b8 req-2b307b88-ce2a-4731-bf8e-d999c948999e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
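The refreshed VIF entry above maps fixed IP 10.100.0.9 to floating IP 192.168.122.248 on network 0dc1b2d1-8ad8-483c-a726-aec9ed2927a1. The cache payload is an ordinary nested structure, so the fixed/floating pairs can be walked out directly; the dict below is abbreviated from the logged entry and the traversal is illustrative:

    # Toy walk over the network_info structure logged above, listing each
    # fixed IP and any floating IPs attached to it.
    vif = {
        "id": "a483f88b-7075-47e4-a535-d23a6d20a8b0",
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.9", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.248",
                                       "type": "floating"}]}]}]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(ip["address"], "->", floats or "no floating IP")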
Nov 24 09:53:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:46.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:47 compute-0 ceph-mon[74331]: pgmap v805: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 09:53:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:47.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:53:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:47.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 24 09:53:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:48 compute-0 sudo[265081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:53:48 compute-0 sudo[265081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:48 compute-0 sudo[265081]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:48 compute-0 sudo[265106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 09:53:48 compute-0 sudo[265106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:48.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:48.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:53:49 compute-0 nova_compute[257700]: 2025-11-24 09:53:49.006 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:49 compute-0 ceph-mon[74331]: pgmap v806: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 24 09:53:49 compute-0 podman[265203]: 2025-11-24 09:53:49.26141759 +0000 UTC m=+0.072196488 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 09:53:49 compute-0 podman[265203]: 2025-11-24 09:53:49.356582066 +0000 UTC m=+0.167360964 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 09:53:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:49.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:49 compute-0 podman[265341]: 2025-11-24 09:53:49.879539889 +0000 UTC m=+0.055202887 container exec c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:53:49 compute-0 podman[265341]: 2025-11-24 09:53:49.88842978 +0000 UTC m=+0.064092748 container exec_died c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:53:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 09:53:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:53:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:53:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:53:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:53:50 compute-0 podman[265415]: 2025-11-24 09:53:50.120438422 +0000 UTC m=+0.053972987 container exec a8ff859c0ee484e58c6aaf58e6d722a3faffb91c2dea80441e79254f2043cb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:53:50 compute-0 podman[265415]: 2025-11-24 09:53:50.134625483 +0000 UTC m=+0.068160058 container exec_died a8ff859c0ee484e58c6aaf58e6d722a3faffb91c2dea80441e79254f2043cb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 24 09:53:50 compute-0 podman[265480]: 2025-11-24 09:53:50.327763624 +0000 UTC m=+0.046850151 container exec 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:53:50 compute-0 podman[265480]: 2025-11-24 09:53:50.3385579 +0000 UTC m=+0.057644427 container exec_died 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 09:53:50 compute-0 podman[265546]: 2025-11-24 09:53:50.556462084 +0000 UTC m=+0.048969193 container exec da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, release=1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.28.2, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64)
Nov 24 09:53:50 compute-0 podman[265546]: 2025-11-24 09:53:50.586486147 +0000 UTC m=+0.078993246 container exec_died da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, name=keepalived, vcs-type=git, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, release=1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived)
Nov 24 09:53:50 compute-0 nova_compute[257700]: 2025-11-24 09:53:50.628 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:50.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:50 compute-0 podman[265611]: 2025-11-24 09:53:50.824477397 +0000 UTC m=+0.057180785 container exec 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:53:50 compute-0 podman[265611]: 2025-11-24 09:53:50.855890055 +0000 UTC m=+0.088593443 container exec_died 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:53:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:53:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 09:53:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:53:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 09:53:51 compute-0 ceph-mon[74331]: pgmap v807: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 09:53:51 compute-0 podman[265684]: 2025-11-24 09:53:51.08719991 +0000 UTC m=+0.055645638 container exec 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:53:51 compute-0 podman[265684]: 2025-11-24 09:53:51.264060118 +0000 UTC m=+0.232505846 container exec_died 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 09:53:51 compute-0 podman[265801]: 2025-11-24 09:53:51.618154962 +0000 UTC m=+0.057770061 container exec 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:53:51 compute-0 podman[265801]: 2025-11-24 09:53:51.671423081 +0000 UTC m=+0.111038150 container exec_died 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 09:53:51 compute-0 sudo[265106]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:53:51 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:53:51 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:51.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:51 compute-0 sudo[265843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:53:51 compute-0 sudo[265843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:51 compute-0 sudo[265843]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:51 compute-0 sudo[265868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:53:51 compute-0 sudo[265868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Nov 24 09:53:52 compute-0 sudo[265868]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Nov 24 09:53:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:53:52 compute-0 ovn_controller[155123]: 2025-11-24T09:53:52Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:ce:66 10.100.0.9
Nov 24 09:53:52 compute-0 ovn_controller[155123]: 2025-11-24T09:53:52Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:ce:66 10.100.0.9
Nov 24 09:53:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:53:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:52 compute-0 sudo[265926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:53:52 compute-0 sudo[265926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:52 compute-0 sudo[265926]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:52 compute-0 sudo[265951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:53:52 compute-0 sudo[265951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:52.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:52 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:52 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:52 compute-0 ceph-mon[74331]: pgmap v808: 353 pgs: 353 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Nov 24 09:53:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:53:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:53:52 compute-0 ceph-mon[74331]: pgmap v809: 353 pgs: 353 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Nov 24 09:53:52 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:52 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:53:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:53:52 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:53:52 compute-0 podman[266017]: 2025-11-24 09:53:52.926558927 +0000 UTC m=+0.038853603 container create def52eb49c9264ba44154c814357365d9ad2817ce3cc45c14e5e5ef071d45736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 09:53:52 compute-0 systemd[1]: Started libpod-conmon-def52eb49c9264ba44154c814357365d9ad2817ce3cc45c14e5e5ef071d45736.scope.
Nov 24 09:53:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:53:52 compute-0 podman[266017]: 2025-11-24 09:53:52.994607531 +0000 UTC m=+0.106902237 container init def52eb49c9264ba44154c814357365d9ad2817ce3cc45c14e5e5ef071d45736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 09:53:53 compute-0 podman[266017]: 2025-11-24 09:53:53.001936063 +0000 UTC m=+0.114230749 container start def52eb49c9264ba44154c814357365d9ad2817ce3cc45c14e5e5ef071d45736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_shockley, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:53:53 compute-0 podman[266017]: 2025-11-24 09:53:52.91091586 +0000 UTC m=+0.023210566 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:53:53 compute-0 podman[266017]: 2025-11-24 09:53:53.006021563 +0000 UTC m=+0.118316239 container attach def52eb49c9264ba44154c814357365d9ad2817ce3cc45c14e5e5ef071d45736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:53:53 compute-0 zen_shockley[266034]: 167 167
Nov 24 09:53:53 compute-0 systemd[1]: libpod-def52eb49c9264ba44154c814357365d9ad2817ce3cc45c14e5e5ef071d45736.scope: Deactivated successfully.
Nov 24 09:53:53 compute-0 conmon[266034]: conmon def52eb49c9264ba4415 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-def52eb49c9264ba44154c814357365d9ad2817ce3cc45c14e5e5ef071d45736.scope/container/memory.events
Nov 24 09:53:53 compute-0 podman[266017]: 2025-11-24 09:53:53.00951233 +0000 UTC m=+0.121807026 container died def52eb49c9264ba44154c814357365d9ad2817ce3cc45c14e5e5ef071d45736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Nov 24 09:53:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-33703f65f290c51f1abdbe6b2a6aa527602e94550417276017348939a45ab6b0-merged.mount: Deactivated successfully.
Nov 24 09:53:53 compute-0 podman[266017]: 2025-11-24 09:53:53.049446888 +0000 UTC m=+0.161741564 container remove def52eb49c9264ba44154c814357365d9ad2817ce3cc45c14e5e5ef071d45736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_shockley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 09:53:53 compute-0 systemd[1]: libpod-conmon-def52eb49c9264ba44154c814357365d9ad2817ce3cc45c14e5e5ef071d45736.scope: Deactivated successfully.
Nov 24 09:53:53 compute-0 podman[266057]: 2025-11-24 09:53:53.213909339 +0000 UTC m=+0.043983919 container create f796f34c255c204acf823ffdad7a095b0f583d4d9b72533c9c44c36e7d05c7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lamarr, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 09:53:53 compute-0 systemd[1]: Started libpod-conmon-f796f34c255c204acf823ffdad7a095b0f583d4d9b72533c9c44c36e7d05c7a9.scope.
Nov 24 09:53:53 compute-0 podman[266057]: 2025-11-24 09:53:53.194210242 +0000 UTC m=+0.024284842 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:53:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5a64bc169706e24828df46af062d77cbaea97f985755570acd61c35ebd30a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5a64bc169706e24828df46af062d77cbaea97f985755570acd61c35ebd30a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5a64bc169706e24828df46af062d77cbaea97f985755570acd61c35ebd30a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5a64bc169706e24828df46af062d77cbaea97f985755570acd61c35ebd30a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5a64bc169706e24828df46af062d77cbaea97f985755570acd61c35ebd30a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:53 compute-0 podman[266057]: 2025-11-24 09:53:53.309940557 +0000 UTC m=+0.140015157 container init f796f34c255c204acf823ffdad7a095b0f583d4d9b72533c9c44c36e7d05c7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 09:53:53 compute-0 podman[266057]: 2025-11-24 09:53:53.319684957 +0000 UTC m=+0.149759537 container start f796f34c255c204acf823ffdad7a095b0f583d4d9b72533c9c44c36e7d05c7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lamarr, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:53:53 compute-0 podman[266057]: 2025-11-24 09:53:53.323966234 +0000 UTC m=+0.154040834 container attach f796f34c255c204acf823ffdad7a095b0f583d4d9b72533c9c44c36e7d05c7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:53:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:53 compute-0 amazing_lamarr[266075]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:53:53 compute-0 amazing_lamarr[266075]: --> All data devices are unavailable
Nov 24 09:53:53 compute-0 systemd[1]: libpod-f796f34c255c204acf823ffdad7a095b0f583d4d9b72533c9c44c36e7d05c7a9.scope: Deactivated successfully.
Nov 24 09:53:53 compute-0 conmon[266075]: conmon f796f34c255c204acf82 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f796f34c255c204acf823ffdad7a095b0f583d4d9b72533c9c44c36e7d05c7a9.scope/container/memory.events
Nov 24 09:53:53 compute-0 podman[266057]: 2025-11-24 09:53:53.635051443 +0000 UTC m=+0.465126023 container died f796f34c255c204acf823ffdad7a095b0f583d4d9b72533c9c44c36e7d05c7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:53:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f5a64bc169706e24828df46af062d77cbaea97f985755570acd61c35ebd30a0-merged.mount: Deactivated successfully.
Nov 24 09:53:53 compute-0 podman[266057]: 2025-11-24 09:53:53.673146506 +0000 UTC m=+0.503221086 container remove f796f34c255c204acf823ffdad7a095b0f583d4d9b72533c9c44c36e7d05c7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:53:53 compute-0 systemd[1]: libpod-conmon-f796f34c255c204acf823ffdad7a095b0f583d4d9b72533c9c44c36e7d05c7a9.scope: Deactivated successfully.
Nov 24 09:53:53 compute-0 sudo[265951]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:53 compute-0 sudo[266103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:53:53 compute-0 sudo[266103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:53.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:53 compute-0 sudo[266103]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:53 compute-0 sudo[266128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:53:53 compute-0 sudo[266128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:54 compute-0 nova_compute[257700]: 2025-11-24 09:53:54.008 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:54 compute-0 podman[266193]: 2025-11-24 09:53:54.197174386 +0000 UTC m=+0.037303334 container create c021b65eb9123e9f3b4f4008321d2f9257b0ed71236c3e4859af792a65eec7b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 24 09:53:54 compute-0 systemd[1]: Started libpod-conmon-c021b65eb9123e9f3b4f4008321d2f9257b0ed71236c3e4859af792a65eec7b3.scope.
Nov 24 09:53:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:53:54 compute-0 podman[266193]: 2025-11-24 09:53:54.267607399 +0000 UTC m=+0.107736347 container init c021b65eb9123e9f3b4f4008321d2f9257b0ed71236c3e4859af792a65eec7b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 09:53:54 compute-0 podman[266193]: 2025-11-24 09:53:54.27411068 +0000 UTC m=+0.114239628 container start c021b65eb9123e9f3b4f4008321d2f9257b0ed71236c3e4859af792a65eec7b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_wu, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 24 09:53:54 compute-0 podman[266193]: 2025-11-24 09:53:54.181329444 +0000 UTC m=+0.021458392 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:53:54 compute-0 podman[266193]: 2025-11-24 09:53:54.277373861 +0000 UTC m=+0.117502829 container attach c021b65eb9123e9f3b4f4008321d2f9257b0ed71236c3e4859af792a65eec7b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_wu, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 09:53:54 compute-0 elegant_wu[266209]: 167 167
Nov 24 09:53:54 compute-0 systemd[1]: libpod-c021b65eb9123e9f3b4f4008321d2f9257b0ed71236c3e4859af792a65eec7b3.scope: Deactivated successfully.
Nov 24 09:53:54 compute-0 conmon[266209]: conmon c021b65eb9123e9f3b4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c021b65eb9123e9f3b4f4008321d2f9257b0ed71236c3e4859af792a65eec7b3.scope/container/memory.events
Nov 24 09:53:54 compute-0 podman[266193]: 2025-11-24 09:53:54.280850707 +0000 UTC m=+0.120979655 container died c021b65eb9123e9f3b4f4008321d2f9257b0ed71236c3e4859af792a65eec7b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_wu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:53:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef62089ba2432d67658d0949a32b24fbc3e0cdab4dbde32e4938fddc942eca59-merged.mount: Deactivated successfully.
Nov 24 09:53:54 compute-0 podman[266193]: 2025-11-24 09:53:54.3132897 +0000 UTC m=+0.153418648 container remove c021b65eb9123e9f3b4f4008321d2f9257b0ed71236c3e4859af792a65eec7b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_wu, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 09:53:54 compute-0 systemd[1]: libpod-conmon-c021b65eb9123e9f3b4f4008321d2f9257b0ed71236c3e4859af792a65eec7b3.scope: Deactivated successfully.
Nov 24 09:53:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Nov 24 09:53:54 compute-0 podman[266233]: 2025-11-24 09:53:54.472499241 +0000 UTC m=+0.040038022 container create 64ec4b959d24240252d1c3526c430d62d74927be86247908ae8357bc03afbc2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_robinson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:53:54 compute-0 ceph-mon[74331]: pgmap v810: 353 pgs: 353 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Nov 24 09:53:54 compute-0 systemd[1]: Started libpod-conmon-64ec4b959d24240252d1c3526c430d62d74927be86247908ae8357bc03afbc2d.scope.
Nov 24 09:53:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a50aad61765a43f94a525c209f01aae374a784649829ecaa8d349150d74778e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a50aad61765a43f94a525c209f01aae374a784649829ecaa8d349150d74778e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a50aad61765a43f94a525c209f01aae374a784649829ecaa8d349150d74778e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a50aad61765a43f94a525c209f01aae374a784649829ecaa8d349150d74778e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:54 compute-0 podman[266233]: 2025-11-24 09:53:54.545347294 +0000 UTC m=+0.112886075 container init 64ec4b959d24240252d1c3526c430d62d74927be86247908ae8357bc03afbc2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_robinson, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:53:54 compute-0 podman[266233]: 2025-11-24 09:53:54.45588049 +0000 UTC m=+0.023419281 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:53:54 compute-0 podman[266233]: 2025-11-24 09:53:54.552192424 +0000 UTC m=+0.119731195 container start 64ec4b959d24240252d1c3526c430d62d74927be86247908ae8357bc03afbc2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_robinson, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:53:54 compute-0 podman[266233]: 2025-11-24 09:53:54.555778122 +0000 UTC m=+0.123316963 container attach 64ec4b959d24240252d1c3526c430d62d74927be86247908ae8357bc03afbc2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 24 09:53:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:54.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]: {
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:     "0": [
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:         {
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             "devices": [
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "/dev/loop3"
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             ],
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             "lv_name": "ceph_lv0",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             "lv_size": "21470642176",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             "name": "ceph_lv0",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             "tags": {
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.cluster_name": "ceph",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.crush_device_class": "",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.encrypted": "0",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.osd_id": "0",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.type": "block",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.vdo": "0",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:                 "ceph.with_tpm": "0"
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             },
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             "type": "block",
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:             "vg_name": "ceph_vg0"
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:         }
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]:     ]
Nov 24 09:53:54 compute-0 affectionate_robinson[266249]: }
Nov 24 09:53:54 compute-0 systemd[1]: libpod-64ec4b959d24240252d1c3526c430d62d74927be86247908ae8357bc03afbc2d.scope: Deactivated successfully.
Nov 24 09:53:54 compute-0 podman[266233]: 2025-11-24 09:53:54.85950376 +0000 UTC m=+0.427042531 container died 64ec4b959d24240252d1c3526c430d62d74927be86247908ae8357bc03afbc2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_robinson, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:53:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a50aad61765a43f94a525c209f01aae374a784649829ecaa8d349150d74778e-merged.mount: Deactivated successfully.
Nov 24 09:53:54 compute-0 podman[266233]: 2025-11-24 09:53:54.899216573 +0000 UTC m=+0.466755344 container remove 64ec4b959d24240252d1c3526c430d62d74927be86247908ae8357bc03afbc2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:53:54 compute-0 systemd[1]: libpod-conmon-64ec4b959d24240252d1c3526c430d62d74927be86247908ae8357bc03afbc2d.scope: Deactivated successfully.
Nov 24 09:53:54 compute-0 sudo[266128]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:54 compute-0 sudo[266271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:53:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:53:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:53:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:53:55 compute-0 sudo[266271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:53:55 compute-0 sudo[266271]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:55 compute-0 sudo[266296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:53:55 compute-0 sudo[266296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:55 compute-0 podman[266361]: 2025-11-24 09:53:55.493319718 +0000 UTC m=+0.040799131 container create 52c4f8c8ea73530c6c53a0ad017b8fbae49ab940bec8064f21820fb30120ec26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 09:53:55 compute-0 systemd[1]: Started libpod-conmon-52c4f8c8ea73530c6c53a0ad017b8fbae49ab940bec8064f21820fb30120ec26.scope.
Nov 24 09:53:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:53:55 compute-0 podman[266361]: 2025-11-24 09:53:55.475925767 +0000 UTC m=+0.023405210 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:53:55 compute-0 nova_compute[257700]: 2025-11-24 09:53:55.629 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:55.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:55 compute-0 podman[266361]: 2025-11-24 09:53:55.822459734 +0000 UTC m=+0.369939187 container init 52c4f8c8ea73530c6c53a0ad017b8fbae49ab940bec8064f21820fb30120ec26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:53:55 compute-0 podman[266361]: 2025-11-24 09:53:55.831462517 +0000 UTC m=+0.378941940 container start 52c4f8c8ea73530c6c53a0ad017b8fbae49ab940bec8064f21820fb30120ec26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:53:55 compute-0 podman[266361]: 2025-11-24 09:53:55.834987524 +0000 UTC m=+0.382466937 container attach 52c4f8c8ea73530c6c53a0ad017b8fbae49ab940bec8064f21820fb30120ec26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:53:55 compute-0 great_lichterman[266378]: 167 167
Nov 24 09:53:55 compute-0 systemd[1]: libpod-52c4f8c8ea73530c6c53a0ad017b8fbae49ab940bec8064f21820fb30120ec26.scope: Deactivated successfully.
Nov 24 09:53:55 compute-0 podman[266361]: 2025-11-24 09:53:55.839323471 +0000 UTC m=+0.386802884 container died 52c4f8c8ea73530c6c53a0ad017b8fbae49ab940bec8064f21820fb30120ec26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lichterman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 09:53:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5441984c699d9d7acb379405aa46aeacf8373ba2ce8393e3cbd01a9777840d98-merged.mount: Deactivated successfully.
Nov 24 09:53:55 compute-0 podman[266361]: 2025-11-24 09:53:55.869623022 +0000 UTC m=+0.417102435 container remove 52c4f8c8ea73530c6c53a0ad017b8fbae49ab940bec8064f21820fb30120ec26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:53:55 compute-0 systemd[1]: libpod-conmon-52c4f8c8ea73530c6c53a0ad017b8fbae49ab940bec8064f21820fb30120ec26.scope: Deactivated successfully.
Nov 24 09:53:56 compute-0 podman[266401]: 2025-11-24 09:53:56.050206562 +0000 UTC m=+0.052255856 container create 6aaa1f06f69aba96235da79bfb74b003ba93f954c48cb687ebfc965360fd29df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 24 09:53:56 compute-0 systemd[1]: Started libpod-conmon-6aaa1f06f69aba96235da79bfb74b003ba93f954c48cb687ebfc965360fd29df.scope.
Nov 24 09:53:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d50d2cd0a64867b7476462b0bc4fd0b5b5bc5bd6fa7b4669f8565c36ae243a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d50d2cd0a64867b7476462b0bc4fd0b5b5bc5bd6fa7b4669f8565c36ae243a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d50d2cd0a64867b7476462b0bc4fd0b5b5bc5bd6fa7b4669f8565c36ae243a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d50d2cd0a64867b7476462b0bc4fd0b5b5bc5bd6fa7b4669f8565c36ae243a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:53:56 compute-0 podman[266401]: 2025-11-24 09:53:56.032174945 +0000 UTC m=+0.034224259 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:53:56 compute-0 podman[266401]: 2025-11-24 09:53:56.137317937 +0000 UTC m=+0.139367341 container init 6aaa1f06f69aba96235da79bfb74b003ba93f954c48cb687ebfc965360fd29df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_galileo, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:53:56 compute-0 podman[266401]: 2025-11-24 09:53:56.14427122 +0000 UTC m=+0.146320514 container start 6aaa1f06f69aba96235da79bfb74b003ba93f954c48cb687ebfc965360fd29df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_galileo, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:53:56 compute-0 podman[266401]: 2025-11-24 09:53:56.148199047 +0000 UTC m=+0.150248431 container attach 6aaa1f06f69aba96235da79bfb74b003ba93f954c48cb687ebfc965360fd29df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:53:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Nov 24 09:53:56 compute-0 ceph-mon[74331]: pgmap v811: 353 pgs: 353 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 2.3 MiB/s wr, 59 op/s
Nov 24 09:53:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:56.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:56 compute-0 lvm[266493]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:53:56 compute-0 lvm[266493]: VG ceph_vg0 finished
Nov 24 09:53:56 compute-0 loving_galileo[266418]: {}
Nov 24 09:53:56 compute-0 systemd[1]: libpod-6aaa1f06f69aba96235da79bfb74b003ba93f954c48cb687ebfc965360fd29df.scope: Deactivated successfully.
Nov 24 09:53:56 compute-0 systemd[1]: libpod-6aaa1f06f69aba96235da79bfb74b003ba93f954c48cb687ebfc965360fd29df.scope: Consumed 1.114s CPU time.
Nov 24 09:53:56 compute-0 podman[266401]: 2025-11-24 09:53:56.854093189 +0000 UTC m=+0.856142503 container died 6aaa1f06f69aba96235da79bfb74b003ba93f954c48cb687ebfc965360fd29df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_galileo, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:53:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d50d2cd0a64867b7476462b0bc4fd0b5b5bc5bd6fa7b4669f8565c36ae243a2-merged.mount: Deactivated successfully.
Nov 24 09:53:56 compute-0 podman[266401]: 2025-11-24 09:53:56.893196756 +0000 UTC m=+0.895246040 container remove 6aaa1f06f69aba96235da79bfb74b003ba93f954c48cb687ebfc965360fd29df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_galileo, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:53:56 compute-0 systemd[1]: libpod-conmon-6aaa1f06f69aba96235da79bfb74b003ba93f954c48cb687ebfc965360fd29df.scope: Deactivated successfully.
Nov 24 09:53:56 compute-0 sudo[266296]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:53:56 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:53:56 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:57 compute-0 sudo[266507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:53:57 compute-0 sudo[266507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:53:57 compute-0 sudo[266507]: pam_unix(sudo:session): session closed for user root
Nov 24 09:53:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:57.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:53:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:57.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:57 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:57 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:53:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 2.4 MiB/s wr, 75 op/s
Nov 24 09:53:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:53:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:53:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:53:58.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:53:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:53:58.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:53:58 compute-0 ceph-mon[74331]: pgmap v812: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 2.4 MiB/s wr, 75 op/s
Nov 24 09:53:59 compute-0 nova_compute[257700]: 2025-11-24 09:53:59.011 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:53:59 compute-0 nova_compute[257700]: 2025-11-24 09:53:59.095 257704 INFO nova.compute.manager [None req-14df6aac-4f0a-4f08-ab72-f7af7cba3ba2 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Get console output
Nov 24 09:53:59 compute-0 nova_compute[257700]: 2025-11-24 09:53:59.100 257704 INFO oslo.privsep.daemon [None req-14df6aac-4f0a-4f08-ab72-f7af7cba3ba2 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpl62p6zwu/privsep.sock']
Nov 24 09:53:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:53:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:53:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:53:59.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:53:59 compute-0 nova_compute[257700]: 2025-11-24 09:53:59.794 257704 INFO oslo.privsep.daemon [None req-14df6aac-4f0a-4f08-ab72-f7af7cba3ba2 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Spawned new privsep daemon via rootwrap
Nov 24 09:53:59 compute-0 nova_compute[257700]: 2025-11-24 09:53:59.672 266539 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 09:53:59 compute-0 nova_compute[257700]: 2025-11-24 09:53:59.680 266539 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 09:53:59 compute-0 nova_compute[257700]: 2025-11-24 09:53:59.685 266539 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 24 09:53:59 compute-0 nova_compute[257700]: 2025-11-24 09:53:59.686 266539 INFO oslo.privsep.daemon [-] privsep daemon running as pid 266539
Nov 24 09:53:59 compute-0 podman[266540]: 2025-11-24 09:53:59.800025564 +0000 UTC m=+0.077434468 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 09:53:59 compute-0 podman[266541]: 2025-11-24 09:53:59.814969953 +0000 UTC m=+0.082012160 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 09:53:59 compute-0 nova_compute[257700]: 2025-11-24 09:53:59.896 266539 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 09:54:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:54:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:54:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:53:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:54:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
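
The recurring ganesha burst above ends each pass with rados_cluster_grace_enforcing: ret=-45. Assuming that value is a negated Linux errno (my reading; these lines do not say so), the symbolic name can be looked up directly:

    import errno
    import os

    # Decode a negated return code as a Linux errno (assumption: Linux numbering).
    ret = -45
    print(errno.errorcode.get(-ret), "-", os.strerror(-ret))
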
Nov 24 09:54:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 2.4 MiB/s wr, 75 op/s
Nov 24 09:54:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:54:00 compute-0 ceph-mon[74331]: pgmap v813: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 2.4 MiB/s wr, 75 op/s
Nov 24 09:54:00 compute-0 nova_compute[257700]: 2025-11-24 09:54:00.631 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:00.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
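
The radosgw/beast triplets above repeat every second or two: anonymous HEAD / probes alternating from 192.168.122.100 and 192.168.122.102, consistent with load-balancer health checks rather than user traffic. One probe can be reproduced by hand as below; the port 8080 is an assumption, since these lines do not show which port beast listens on:

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)  # port assumed
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the probes above get 200
    conn.close()
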
Nov 24 09:54:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:54:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 09:54:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:54:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 09:54:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 09:54:01 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1086416871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:54:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 09:54:01 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1086416871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:54:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1086416871' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:54:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1086416871' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:54:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:01.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 117 KiB/s wr, 18 op/s
Nov 24 09:54:02 compute-0 ceph-mon[74331]: pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 117 KiB/s wr, 18 op/s
Nov 24 09:54:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:02.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:03.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:04 compute-0 nova_compute[257700]: 2025-11-24 09:54:04.013 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 102 KiB/s wr, 16 op/s
Nov 24 09:54:04 compute-0 ceph-mon[74331]: pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 102 KiB/s wr, 16 op/s
Nov 24 09:54:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:04.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:54:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:54:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:54:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:54:05 compute-0 nova_compute[257700]: 2025-11-24 09:54:05.633 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:05.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:05 compute-0 sudo[266594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:54:05 compute-0 sudo[266594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:05 compute-0 sudo[266594]: pam_unix(sudo:session): session closed for user root
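
The three sudo/pam_unix lines above record ceph-admin escalating to root only to run /bin/true, the usual pattern for a privilege probe (cephadm performs this kind of check over SSH). A local equivalent, assuming passwordless sudo is configured; the -n flag (fail instead of prompting) is my addition, not something the log shows:

    import subprocess

    # Exit code 0 means non-interactive sudo to root works on this host.
    ok = subprocess.run(["sudo", "-n", "/bin/true"]).returncode == 0
    print("passwordless sudo:", ok)
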
Nov 24 09:54:06 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:06.406 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:54:06 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:06.409 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:54:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 102 KiB/s wr, 16 op/s
Nov 24 09:54:06 compute-0 nova_compute[257700]: 2025-11-24 09:54:06.440 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:06 compute-0 ceph-mon[74331]: pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 102 KiB/s wr, 16 op/s
Nov 24 09:54:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:06.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:07.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:54:07 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:07.411 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:54:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:07.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:07 compute-0 podman[266621]: 2025-11-24 09:54:07.807653159 +0000 UTC m=+0.072144867 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 24 09:54:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 106 KiB/s wr, 17 op/s
Nov 24 09:54:08 compute-0 ceph-mon[74331]: pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 106 KiB/s wr, 17 op/s
Nov 24 09:54:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:08.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:08.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
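
Both alertmanager dispatch errors above fail the same way: POSTs to port 8443 on compute-1 and compute-2 hit i/o timeout or context deadline exceeded, so the dashboard webhook receivers are unreachable rather than rejecting the payload. A minimal reachability triage for those two endpoints, hostnames taken from the errors themselves (the 2-second timeout is arbitrary):

    import socket

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        try:
            # Same host:port the webhook POSTs target.
            with socket.create_connection((host, 8443), timeout=2):
                print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "unreachable:", exc)
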
Nov 24 09:54:09 compute-0 nova_compute[257700]: 2025-11-24 09:54:09.016 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:09.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:54:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:54:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:54:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:54:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 15 KiB/s wr, 2 op/s
Nov 24 09:54:10 compute-0 ceph-mon[74331]: pgmap v818: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 15 KiB/s wr, 2 op/s
Nov 24 09:54:10 compute-0 nova_compute[257700]: 2025-11-24 09:54:10.635 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:10.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:54:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 09:54:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:54:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 09:54:11 compute-0 sshd-session[266642]: Invalid user test from 83.229.122.23 port 54896
Nov 24 09:54:11 compute-0 sshd-session[266642]: Received disconnect from 83.229.122.23 port 54896:11: Bye Bye [preauth]
Nov 24 09:54:11 compute-0 sshd-session[266642]: Disconnected from invalid user test 83.229.122.23 port 54896 [preauth]
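
The sshd-session lines above (and a similar burst from 36.255.3.203 a few seconds later) are password-guessing attempts against nonexistent users, dropped at preauth. A short scan that tallies such attempts per source address from a saved copy of this log; messages.log is hypothetical:

    import re
    from collections import Counter

    pattern = re.compile(r"Invalid user (\S+) from (\S+) port")
    attempts = Counter()
    with open("messages.log") as fh:  # hypothetical saved copy of this log
        for line in fh:
            m = pattern.search(line)
            if m:
                attempts[m.group(2)] += 1  # count by source IP
    print(attempts.most_common(10))
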
Nov 24 09:54:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:11.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:11 compute-0 nova_compute[257700]: 2025-11-24 09:54:11.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 15 KiB/s wr, 2 op/s
Nov 24 09:54:12 compute-0 ceph-mon[74331]: pgmap v819: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 15 KiB/s wr, 2 op/s
Nov 24 09:54:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:12.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:12 compute-0 nova_compute[257700]: 2025-11-24 09:54:12.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:13 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1658824246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:13.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:13 compute-0 nova_compute[257700]: 2025-11-24 09:54:13.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:13 compute-0 nova_compute[257700]: 2025-11-24 09:54:13.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:13 compute-0 nova_compute[257700]: 2025-11-24 09:54:13.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:54:14 compute-0 nova_compute[257700]: 2025-11-24 09:54:14.018 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.3 KiB/s wr, 1 op/s
Nov 24 09:54:14 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4102294369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:14 compute-0 ceph-mon[74331]: pgmap v820: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.3 KiB/s wr, 1 op/s
Nov 24 09:54:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:14.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:14 compute-0 nova_compute[257700]: 2025-11-24 09:54:14.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:14 compute-0 nova_compute[257700]: 2025-11-24 09:54:14.946 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:14 compute-0 nova_compute[257700]: 2025-11-24 09:54:14.946 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:14 compute-0 nova_compute[257700]: 2025-11-24 09:54:14.947 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:14 compute-0 nova_compute[257700]: 2025-11-24 09:54:14.947 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:54:14 compute-0 nova_compute[257700]: 2025-11-24 09:54:14.947 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
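
The resource audit above shells out to ceph df exactly as logged because the instance disks live in RBD, so free disk space must come from the cluster rather than the local filesystem. The same call and the totals nova reads back, sketched below; the stats field names match current Ceph JSON output but treat them as an assumption if your release differs:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print("total GiB:", stats["total_bytes"] / 2**30)
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)
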
Nov 24 09:54:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:54:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:54:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:54:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:54:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:54:15 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3640733103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:54:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:54:15 compute-0 nova_compute[257700]: 2025-11-24 09:54:15.405 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:54:15 compute-0 nova_compute[257700]: 2025-11-24 09:54:15.478 257704 DEBUG nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 09:54:15 compute-0 nova_compute[257700]: 2025-11-24 09:54:15.478 257704 DEBUG nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 09:54:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:54:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:54:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:54:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:54:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3060430779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3640733103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:54:15 compute-0 nova_compute[257700]: 2025-11-24 09:54:15.637 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:15 compute-0 nova_compute[257700]: 2025-11-24 09:54:15.646 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:54:15 compute-0 nova_compute[257700]: 2025-11-24 09:54:15.648 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4446MB free_disk=59.942718505859375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:54:15 compute-0 nova_compute[257700]: 2025-11-24 09:54:15.648 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:15 compute-0 nova_compute[257700]: 2025-11-24 09:54:15.648 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:15 compute-0 nova_compute[257700]: 2025-11-24 09:54:15.722 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Instance a30689a0-a2d7-4b8d-9f45-9763cda52bf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 09:54:15 compute-0 nova_compute[257700]: 2025-11-24 09:54:15.723 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:54:15 compute-0 nova_compute[257700]: 2025-11-24 09:54:15.723 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:54:15 compute-0 nova_compute[257700]: 2025-11-24 09:54:15.770 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:15.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:15 compute-0 sshd-session[266648]: Invalid user erpnext from 36.255.3.203 port 35613
Nov 24 09:54:16 compute-0 sshd-session[266648]: Received disconnect from 36.255.3.203 port 35613:11: Bye Bye [preauth]
Nov 24 09:54:16 compute-0 sshd-session[266648]: Disconnected from invalid user erpnext 36.255.3.203 port 35613 [preauth]
Nov 24 09:54:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:54:16 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2588724526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.220 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.225 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating inventory in ProviderTree for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
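
Placement turns each inventory record above into schedulable capacity as (total - reserved) * allocation_ratio, so the update being attempted advertises 7168 MB of RAM, 32 VCPUs, and 52.2 GB of disk (placement effectively floors these to integers when enforcing allocations):

    # Worked out from the inventory in the line above.
    inventory = {
        "MEMORY_MB": (7680, 512, 1.0),
        "VCPU": (8, 0, 4.0),
        "DISK_GB": (59, 1, 0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)  # 7168.0, 32.0, 52.2
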
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.272 257704 ERROR nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [req-7a0ee742-be34-478c-9376-fc5857c20a12] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID a50ce3b5-7e9e-4263-a4aa-c35573ac7257.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-7a0ee742-be34-478c-9376-fc5857c20a12"}]}
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.284 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing inventories for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.300 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating ProviderTree inventory for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.300 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating inventory in ProviderTree for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.314 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing aggregate associations for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.331 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing trait associations for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,HW_CPU_X86_BMI,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.360 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.3 KiB/s wr, 1 op/s
Nov 24 09:54:16 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2342644203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:16 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2588724526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:16 compute-0 ceph-mon[74331]: pgmap v821: 353 pgs: 353 active+clean; 121 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.3 KiB/s wr, 1 op/s
Nov 24 09:54:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:16.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:54:16 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3524898620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.812 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.818 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating inventory in ProviderTree for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.864 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updated inventory for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.864 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.864 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating inventory in ProviderTree for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.884 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:54:16 compute-0 nova_compute[257700]: 2025-11-24 09:54:16.885 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
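
The lines from 09:54:16.272 through 09:54:16.885 above show placement's optimistic-concurrency loop end to end: the PUT carries a stale resource-provider generation and is rejected with 409 placement.concurrent_update, nova refreshes inventories, aggregates, and traits (picking up generation 3), retries successfully, and the provider advances to generation 4. A schematic of that retry pattern; the function names below are illustrative stand-ins, not nova's actual API:

    def set_inventory_with_retry(fetch_provider, put_inventory, inventory, attempts=3):
        """Generation-aware update: refetch on 409 and retry with the fresh generation."""
        for _ in range(attempts):
            provider = fetch_provider()            # returns the current generation
            status = put_inventory(provider["generation"], inventory)
            if status != 409:                      # only a concurrent_update retries
                return status
        raise RuntimeError("placement generation conflict persisted")
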
Nov 24 09:54:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:17.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:54:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3524898620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1456248496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:17.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:17 compute-0 nova_compute[257700]: 2025-11-24 09:54:17.885 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:17 compute-0 nova_compute[257700]: 2025-11-24 09:54:17.885 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:54:17 compute-0 nova_compute[257700]: 2025-11-24 09:54:17.885 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:54:18 compute-0 nova_compute[257700]: 2025-11-24 09:54:18.058 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:54:18 compute-0 nova_compute[257700]: 2025-11-24 09:54:18.059 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquired lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:54:18 compute-0 nova_compute[257700]: 2025-11-24 09:54:18.059 257704 DEBUG nova.network.neutron [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 09:54:18 compute-0 nova_compute[257700]: 2025-11-24 09:54:18.059 257704 DEBUG nova.objects.instance [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a30689a0-a2d7-4b8d-9f45-9763cda52bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:54:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 24 09:54:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3790276299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2473323629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:54:18 compute-0 ceph-mon[74331]: pgmap v822: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 24 09:54:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:54:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:18.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:54:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:18.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:54:19 compute-0 nova_compute[257700]: 2025-11-24 09:54:19.020 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:19 compute-0 nova_compute[257700]: 2025-11-24 09:54:19.082 257704 DEBUG nova.network.neutron [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Updating instance_info_cache with network_info: [{"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:54:19 compute-0 nova_compute[257700]: 2025-11-24 09:54:19.094 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Releasing lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:54:19 compute-0 nova_compute[257700]: 2025-11-24 09:54:19.095 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
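
The info-cache refresh above logs the instance's full network_info as a JSON array (double quotes, lowercase true/false/null), so the addresses are machine-recoverable. A sketch that pulls fixed and floating IPs back out of such a line; the trailing anchor in the regex matches the function name nova appends, as in the line above:

    import json
    import re

    def vif_addresses(log_line):
        m = re.search(r"network_info: (\[.*\]) update_instance_cache", log_line)
        if not m:
            return []
        pairs = []
        for vif in json.loads(m.group(1)):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"] for f in ip.get("floating_ips", [])]
                    pairs.append((ip["address"], floats))
        return pairs  # here: [('10.100.0.9', ['192.168.122.248'])]
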
Nov 24 09:54:19 compute-0 nova_compute[257700]: 2025-11-24 09:54:19.095 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3807305221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:54:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:54:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:19.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:54:19 compute-0 nova_compute[257700]: 2025-11-24 09:54:19.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:19 compute-0 nova_compute[257700]: 2025-11-24 09:54:19.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:54:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:54:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:54:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:54:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:54:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 24 09:54:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:20.563 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:20.563 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:20.564 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:20 compute-0 ceph-mon[74331]: pgmap v823: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 24 09:54:20 compute-0 nova_compute[257700]: 2025-11-24 09:54:20.640 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:20.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:54:20] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Nov 24 09:54:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:54:20] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Nov 24 09:54:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:21.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Nov 24 09:54:22 compute-0 ceph-mon[74331]: pgmap v824: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Nov 24 09:54:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:22.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:23.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:24 compute-0 nova_compute[257700]: 2025-11-24 09:54:24.021 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 24 09:54:24 compute-0 ceph-mon[74331]: pgmap v825: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 24 09:54:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:24.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:54:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:54:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:54:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:54:25 compute-0 nova_compute[257700]: 2025-11-24 09:54:25.642 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:25.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:25 compute-0 sudo[266729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:54:25 compute-0 sudo[266729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:25 compute-0 sudo[266729]: pam_unix(sudo:session): session closed for user root
Nov 24 09:54:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 24 09:54:26 compute-0 ceph-mon[74331]: pgmap v826: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 24 09:54:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:26.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:27.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:54:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:27.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Nov 24 09:54:28 compute-0 ceph-mon[74331]: pgmap v827: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Nov 24 09:54:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:28.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:54:29 compute-0 nova_compute[257700]: 2025-11-24 09:54:29.072 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:29.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:54:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:54:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:54:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:54:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Nov 24 09:54:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:54:30 compute-0 nova_compute[257700]: 2025-11-24 09:54:30.645 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:30 compute-0 nova_compute[257700]: 2025-11-24 09:54:30.648 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:30.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:30 compute-0 podman[266760]: 2025-11-24 09:54:30.806338166 +0000 UTC m=+0.078161135 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:54:30 compute-0 podman[266761]: 2025-11-24 09:54:30.830451373 +0000 UTC m=+0.102946969 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:54:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:54:30] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Nov 24 09:54:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:54:30] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Nov 24 09:54:31 compute-0 ceph-mon[74331]: pgmap v828: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Nov 24 09:54:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:31.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Nov 24 09:54:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:54:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:32.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:54:33 compute-0 ceph-mon[74331]: pgmap v829: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Nov 24 09:54:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:33.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:33 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 09:54:34 compute-0 nova_compute[257700]: 2025-11-24 09:54:34.075 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 65 op/s
Nov 24 09:54:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:34.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:54:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:54:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:54:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:54:35 compute-0 ceph-mon[74331]: pgmap v830: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 65 op/s
Nov 24 09:54:35 compute-0 nova_compute[257700]: 2025-11-24 09:54:35.648 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:35.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 65 op/s
Nov 24 09:54:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:36.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:37.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:54:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:37.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:54:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:37.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:54:37 compute-0 ceph-mon[74331]: pgmap v831: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 KiB/s wr, 65 op/s
Nov 24 09:54:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:37.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 24 09:54:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:38 compute-0 podman[266811]: 2025-11-24 09:54:38.772914688 +0000 UTC m=+0.044986984 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:54:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:38.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:38.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:54:39 compute-0 nova_compute[257700]: 2025-11-24 09:54:39.078 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:39 compute-0 ceph-mon[74331]: pgmap v832: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 24 09:54:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:39.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:54:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:54:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:54:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:54:40 compute-0 sshd-session[266758]: error: kex_exchange_identification: read: Connection timed out
Nov 24 09:54:40 compute-0 sshd-session[266758]: banner exchange: Connection from 121.31.210.125 port 58524: Connection timed out
Nov 24 09:54:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 24 09:54:40 compute-0 nova_compute[257700]: 2025-11-24 09:54:40.652 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:40.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:54:40] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Nov 24 09:54:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:54:40] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Nov 24 09:54:41 compute-0 ceph-mon[74331]: pgmap v833: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 24 09:54:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:41.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 24 09:54:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:54:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:42.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:54:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:43 compute-0 ceph-mon[74331]: pgmap v834: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 24 09:54:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:43.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:44 compute-0 nova_compute[257700]: 2025-11-24 09:54:44.080 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 24 09:54:44 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1476535774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:44.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:54:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:54:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:54:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:54:45
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'images', '.rgw.root', 'default.rgw.meta', 'backups', '.nfs', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log']
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:54:45 compute-0 nova_compute[257700]: 2025-11-24 09:54:45.656 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:45 compute-0 ceph-mon[74331]: pgmap v835: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 24 09:54:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001518991237709188 of space, bias 1.0, pg target 0.4556973713127564 quantized to 32 (current 32)
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:54:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:54:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:45.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:46 compute-0 sudo[266837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:54:46 compute-0 sudo[266837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:46 compute-0 sudo[266837]: pam_unix(sudo:session): session closed for user root
Nov 24 09:54:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 24 09:54:46 compute-0 ovn_controller[155123]: 2025-11-24T09:54:46Z|00034|binding|INFO|Releasing lport 6d80ed6e-a23b-438d-8881-7f4774f1703c from this chassis (sb_readonly=0)
Nov 24 09:54:46 compute-0 nova_compute[257700]: 2025-11-24 09:54:46.633 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:46.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:47.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:54:47 compute-0 ceph-mon[74331]: pgmap v836: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 24 09:54:47 compute-0 nova_compute[257700]: 2025-11-24 09:54:47.830 257704 DEBUG nova.compute.manager [req-f9133572-449d-41d8-bd62-3abd743459cd req-6488b064-3148-4eff-a127-579d18d59e0c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Received event network-changed-a483f88b-7075-47e4-a535-d23a6d20a8b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:54:47 compute-0 nova_compute[257700]: 2025-11-24 09:54:47.831 257704 DEBUG nova.compute.manager [req-f9133572-449d-41d8-bd62-3abd743459cd req-6488b064-3148-4eff-a127-579d18d59e0c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Refreshing instance network info cache due to event network-changed-a483f88b-7075-47e4-a535-d23a6d20a8b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:54:47 compute-0 nova_compute[257700]: 2025-11-24 09:54:47.831 257704 DEBUG oslo_concurrency.lockutils [req-f9133572-449d-41d8-bd62-3abd743459cd req-6488b064-3148-4eff-a127-579d18d59e0c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:54:47 compute-0 nova_compute[257700]: 2025-11-24 09:54:47.831 257704 DEBUG oslo_concurrency.lockutils [req-f9133572-449d-41d8-bd62-3abd743459cd req-6488b064-3148-4eff-a127-579d18d59e0c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:54:47 compute-0 nova_compute[257700]: 2025-11-24 09:54:47.831 257704 DEBUG nova.network.neutron [req-f9133572-449d-41d8-bd62-3abd743459cd req-6488b064-3148-4eff-a127-579d18d59e0c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Refreshing network info cache for port a483f88b-7075-47e4-a535-d23a6d20a8b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:54:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:47.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:47 compute-0 nova_compute[257700]: 2025-11-24 09:54:47.893 257704 DEBUG oslo_concurrency.lockutils [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:47 compute-0 nova_compute[257700]: 2025-11-24 09:54:47.893 257704 DEBUG oslo_concurrency.lockutils [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:47 compute-0 nova_compute[257700]: 2025-11-24 09:54:47.893 257704 DEBUG oslo_concurrency.lockutils [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:47 compute-0 nova_compute[257700]: 2025-11-24 09:54:47.894 257704 DEBUG oslo_concurrency.lockutils [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:47 compute-0 nova_compute[257700]: 2025-11-24 09:54:47.894 257704 DEBUG oslo_concurrency.lockutils [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:47 compute-0 nova_compute[257700]: 2025-11-24 09:54:47.895 257704 INFO nova.compute.manager [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Terminating instance
Nov 24 09:54:47 compute-0 nova_compute[257700]: 2025-11-24 09:54:47.896 257704 DEBUG nova.compute.manager [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 09:54:48 compute-0 kernel: tapa483f88b-70 (unregistering): left promiscuous mode
Nov 24 09:54:48 compute-0 NetworkManager[48883]: <info>  [1763978088.0215] device (tapa483f88b-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 09:54:48 compute-0 ovn_controller[155123]: 2025-11-24T09:54:48Z|00035|binding|INFO|Releasing lport a483f88b-7075-47e4-a535-d23a6d20a8b0 from this chassis (sb_readonly=0)
Nov 24 09:54:48 compute-0 ovn_controller[155123]: 2025-11-24T09:54:48Z|00036|binding|INFO|Setting lport a483f88b-7075-47e4-a535-d23a6d20a8b0 down in Southbound
Nov 24 09:54:48 compute-0 ovn_controller[155123]: 2025-11-24T09:54:48Z|00037|binding|INFO|Removing iface tapa483f88b-70 ovn-installed in OVS
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.040 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.045 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:ce:66 10.100.0.9'], port_security=['fa:16:3e:60:ce:66 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a30689a0-a2d7-4b8d-9f45-9763cda52bf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94df962a-9564-4d32-ae9d-240621404de3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d43e3d9c-2d20-485b-a9cd-f3ec621a22dc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=a483f88b-7075-47e4-a535-d23a6d20a8b0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.046 165073 INFO neutron.agent.ovn.metadata.agent [-] Port a483f88b-7075-47e4-a535-d23a6d20a8b0 in datapath 0dc1b2d1-8ad8-483c-a726-aec9ed2927a1 unbound from our chassis
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.047 165073 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0dc1b2d1-8ad8-483c-a726-aec9ed2927a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.048 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a76a36-0f12-4b2f-974d-f64d837c4807]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.050 165073 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1 namespace which is not needed anymore
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.059 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:48 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 24 09:54:48 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.128s CPU time.
Nov 24 09:54:48 compute-0 systemd-machined[219130]: Machine qemu-1-instance-00000001 terminated.
Nov 24 09:54:48 compute-0 neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1[265032]: [NOTICE]   (265037) : haproxy version is 2.8.14-c23fe91
Nov 24 09:54:48 compute-0 neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1[265032]: [NOTICE]   (265037) : path to executable is /usr/sbin/haproxy
Nov 24 09:54:48 compute-0 neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1[265032]: [WARNING]  (265037) : Exiting Master process...
Nov 24 09:54:48 compute-0 neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1[265032]: [ALERT]    (265037) : Current worker (265039) exited with code 143 (Terminated)
Nov 24 09:54:48 compute-0 neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1[265032]: [WARNING]  (265037) : All workers exited. Exiting... (0)
Nov 24 09:54:48 compute-0 systemd[1]: libpod-f2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d.scope: Deactivated successfully.
Nov 24 09:54:48 compute-0 podman[266887]: 2025-11-24 09:54:48.247082197 +0000 UTC m=+0.062273962 container died f2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 09:54:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d-userdata-shm.mount: Deactivated successfully.
Nov 24 09:54:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a23b761c9bcb67f4eeac16b882980d2d463f8e0133d4e6d0a00c61cd9c851009-merged.mount: Deactivated successfully.
Nov 24 09:54:48 compute-0 podman[266887]: 2025-11-24 09:54:48.2863874 +0000 UTC m=+0.101579165 container cleanup f2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:54:48 compute-0 systemd[1]: libpod-conmon-f2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d.scope: Deactivated successfully.
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.298 257704 DEBUG nova.compute.manager [req-6b1941dc-2612-4534-bb84-03d093f8bab3 req-9a686cbc-b34b-4757-80ea-404978971ac2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Received event network-vif-unplugged-a483f88b-7075-47e4-a535-d23a6d20a8b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.300 257704 DEBUG oslo_concurrency.lockutils [req-6b1941dc-2612-4534-bb84-03d093f8bab3 req-9a686cbc-b34b-4757-80ea-404978971ac2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.301 257704 DEBUG oslo_concurrency.lockutils [req-6b1941dc-2612-4534-bb84-03d093f8bab3 req-9a686cbc-b34b-4757-80ea-404978971ac2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.302 257704 DEBUG oslo_concurrency.lockutils [req-6b1941dc-2612-4534-bb84-03d093f8bab3 req-9a686cbc-b34b-4757-80ea-404978971ac2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.303 257704 DEBUG nova.compute.manager [req-6b1941dc-2612-4534-bb84-03d093f8bab3 req-9a686cbc-b34b-4757-80ea-404978971ac2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] No waiting events found dispatching network-vif-unplugged-a483f88b-7075-47e4-a535-d23a6d20a8b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.304 257704 DEBUG nova.compute.manager [req-6b1941dc-2612-4534-bb84-03d093f8bab3 req-9a686cbc-b34b-4757-80ea-404978971ac2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Received event network-vif-unplugged-a483f88b-7075-47e4-a535-d23a6d20a8b0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
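[Annotation] The Acquiring/acquired/released triple around the "<uuid>-events" lock is oslo.concurrency's named-lock pattern, which nova uses here to serialize access to its per-instance event table before popping a waiter for network-vif-unplugged. A minimal sketch of the same pattern, assuming an illustrative _events dict rather than nova's real structures:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = 'a30689a0-a2d7-4b8d-9f45-9763cda52bf9'
    _events = {}  # event name -> waiter; illustrative stand-in for nova's table

    def pop_instance_event(name):
        # Same '<uuid>-events' lock name that appears in the log lines above.
        with lockutils.lock(INSTANCE_UUID + '-events'):
            return _events.pop(name, None)

    waiter = pop_instance_event(
        'network-vif-unplugged-a483f88b-7075-47e4-a535-d23a6d20a8b0')
    # waiter is None, matching "No waiting events found dispatching ..." above.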
Nov 24 09:54:48 compute-0 kernel: tapa483f88b-70: entered promiscuous mode
Nov 24 09:54:48 compute-0 kernel: tapa483f88b-70 (unregistering): left promiscuous mode
Nov 24 09:54:48 compute-0 NetworkManager[48883]: <info>  [1763978088.3219] manager: (tapa483f88b-70): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.332 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.340 257704 INFO nova.virt.libvirt.driver [-] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Instance destroyed successfully.
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.341 257704 DEBUG nova.objects.instance [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid a30689a0-a2d7-4b8d-9f45-9763cda52bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.351 257704 DEBUG nova.virt.libvirt.vif [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:53:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1978315975',display_name='tempest-TestNetworkBasicOps-server-1978315975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1978315975',id=1,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJNthCOG5s/vj2boz4+BJ2PgQP/GfmCVp6AlZEWP14On33KLzHHGoFmk6PLUtyAqj03T1Qn3XgryOX94XA7OB9At/bgHp1KmuCanoF6+mPqReV5daqHshzy/eMS+IKuQNA==',key_name='tempest-TestNetworkBasicOps-1678285583',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:53:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-cxh7ww8x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:53:39Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=a30689a0-a2d7-4b8d-9f45-9763cda52bf9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.351 257704 DEBUG nova.network.os_vif_util [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.352 257704 DEBUG nova.network.os_vif_util [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:ce:66,bridge_name='br-int',has_traffic_filtering=True,id=a483f88b-7075-47e4-a535-d23a6d20a8b0,network=Network(0dc1b2d1-8ad8-483c-a726-aec9ed2927a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa483f88b-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.352 257704 DEBUG os_vif [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:ce:66,bridge_name='br-int',has_traffic_filtering=True,id=a483f88b-7075-47e4-a535-d23a6d20a8b0,network=Network(0dc1b2d1-8ad8-483c-a726-aec9ed2927a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa483f88b-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.355 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.355 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa483f88b-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.357 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.358 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.361 257704 INFO os_vif [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:ce:66,bridge_name='br-int',has_traffic_filtering=True,id=a483f88b-7075-47e4-a535-d23a6d20a8b0,network=Network(0dc1b2d1-8ad8-483c-a726-aec9ed2927a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa483f88b-70')
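[Annotation] The DelPortCommand transaction at 09:54:48.355 is ovsdbapp's del_port issued against the local Open vSwitch database. A standalone sketch of the same call, assuming the stock unix-socket path and an arbitrary timeout (neither is taken from this host's config):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server; socket path and timeout are assumptions.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of DelPortCommand(port=tapa483f88b-70, bridge=br-int,
    # if_exists=True) from the transaction logged above.
    api.del_port('tapa483f88b-70', bridge='br-int', if_exists=True).execute(
        check_error=True)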
Nov 24 09:54:48 compute-0 podman[266917]: 2025-11-24 09:54:48.364718699 +0000 UTC m=+0.059337760 container remove f2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.373 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[1b1f82a2-9df1-465e-acc0-aecd654f02b3]: (4, ('Mon Nov 24 09:54:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1 (f2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d)\nf2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d\nMon Nov 24 09:54:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1 (f2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d)\nf2d3e8a33983cf14d2bb3321e1a5d8289b41609578b66e349bbe70131a6c254d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.374 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[8a771ce8-1ae6-4fcb-9e62-e345a7f08858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.375 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0dc1b2d1-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.378 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:48 compute-0 kernel: tap0dc1b2d1-80: left promiscuous mode
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.395 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.397 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[a2eae234-fa2f-464e-8fda-b75b8182359a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.410 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4ae903-b78e-4389-9bfb-4f969abe543a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.411 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[92405607-1c2d-4e60-97b4-b3bdaa90901e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.426 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[cf858cbb-5428-41e3-8cff-14c5c67ce4cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396922, 'reachable_time': 32025, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266956, 'error': None, 'target': 'ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.436 165227 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 09:54:48 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:54:48.436 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[4df5548b-dabc-4e58-ae6a-d311b4ad450f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:54:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d0dc1b2d1\x2d8ad8\x2d483c\x2da726\x2daec9ed2927a1.mount: Deactivated successfully.
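[Annotation] remove_netns at 09:54:48.436 is neutron's privileged ip_lib helper deleting the ovnmeta namespace once haproxy and the tap device are gone; systemd then reaps the run-netns bind mount, as logged above. Neutron's privileged helper is built on pyroute2; the plain pyroute2 equivalent, as a sketch rather than neutron's exact code path:

    import errno

    from pyroute2 import netns

    try:
        netns.remove('ovnmeta-0dc1b2d1-8ad8-483c-a726-aec9ed2927a1')
    except OSError as exc:
        if exc.errno != errno.ENOENT:  # tolerate "already deleted"
            raise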
Nov 24 09:54:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Nov 24 09:54:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.774 257704 INFO nova.virt.libvirt.driver [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Deleting instance files /var/lib/nova/instances/a30689a0-a2d7-4b8d-9f45-9763cda52bf9_del
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.775 257704 INFO nova.virt.libvirt.driver [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Deletion of /var/lib/nova/instances/a30689a0-a2d7-4b8d-9f45-9763cda52bf9_del complete
Nov 24 09:54:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:48.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
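[Annotation] The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, repeating every couple of seconds through the rest of this capture, have the shape of load-balancer health probes against radosgw rather than client traffic. A probe of that shape (the target port is an assumption; the log does not name it):

    import http.client

    # Illustrative health probe; 8080 is an assumed radosgw beast port.
    conn = http.client.HTTPConnection('compute-0.ctlplane.example.com', 8080,
                                      timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # radosgw answers 200 with an empty body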
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.815 257704 DEBUG nova.virt.libvirt.host [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.816 257704 INFO nova.virt.libvirt.host [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] UEFI support detected
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.817 257704 INFO nova.compute.manager [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Took 0.92 seconds to destroy the instance on the hypervisor.
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.817 257704 DEBUG oslo.service.loopingcall [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.817 257704 DEBUG nova.compute.manager [-] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.818 257704 DEBUG nova.network.neutron [-] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.821 257704 DEBUG nova.network.neutron [req-f9133572-449d-41d8-bd62-3abd743459cd req-6488b064-3148-4eff-a127-579d18d59e0c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Updated VIF entry in instance network info cache for port a483f88b-7075-47e4-a535-d23a6d20a8b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.821 257704 DEBUG nova.network.neutron [req-f9133572-449d-41d8-bd62-3abd743459cd req-6488b064-3148-4eff-a127-579d18d59e0c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Updating instance_info_cache with network_info: [{"id": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "address": "fa:16:3e:60:ce:66", "network": {"id": "0dc1b2d1-8ad8-483c-a726-aec9ed2927a1", "bridge": "br-int", "label": "tempest-network-smoke--1014744228", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa483f88b-70", "ovs_interfaceid": "a483f88b-7075-47e4-a535-d23a6d20a8b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:54:48 compute-0 nova_compute[257700]: 2025-11-24 09:54:48.834 257704 DEBUG oslo_concurrency.lockutils [req-f9133572-449d-41d8-bd62-3abd743459cd req-6488b064-3148-4eff-a127-579d18d59e0c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-a30689a0-a2d7-4b8d-9f45-9763cda52bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:54:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:48.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:54:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:48.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
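[Annotation] Both dispatcher errors above are the ceph-dashboard webhook receivers on compute-1 and compute-2 being unreachable on port 8443. Alertmanager delivers such notifications as a JSON POST to the receiver URL; a sketch of the failing request with an abbreviated payload (only the URL comes from the log):

    import json
    import urllib.request

    req = urllib.request.Request(
        'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver',
        data=json.dumps({'alerts': []}).encode(),  # abbreviated webhook payload
        headers={'Content-Type': 'application/json'})
    # Raises URLError on the same dial timeout the dispatcher logs above.
    urllib.request.urlopen(req, timeout=5)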
Nov 24 09:54:49 compute-0 nova_compute[257700]: 2025-11-24 09:54:49.609 257704 DEBUG nova.network.neutron [-] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:54:49 compute-0 nova_compute[257700]: 2025-11-24 09:54:49.626 257704 INFO nova.compute.manager [-] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Took 0.81 seconds to deallocate network for instance.
Nov 24 09:54:49 compute-0 nova_compute[257700]: 2025-11-24 09:54:49.662 257704 DEBUG oslo_concurrency.lockutils [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:49 compute-0 nova_compute[257700]: 2025-11-24 09:54:49.662 257704 DEBUG oslo_concurrency.lockutils [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:49 compute-0 nova_compute[257700]: 2025-11-24 09:54:49.705 257704 DEBUG oslo_concurrency.processutils [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:54:49 compute-0 ceph-mon[74331]: pgmap v837: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Nov 24 09:54:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:49.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:49 compute-0 nova_compute[257700]: 2025-11-24 09:54:49.913 257704 DEBUG nova.compute.manager [req-aaddd1c3-03a8-4756-9fe3-ed429b494280 req-b29bb4e0-a7f1-4dd9-b621-5df8e04c3e3f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Received event network-vif-deleted-a483f88b-7075-47e4-a535-d23a6d20a8b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:54:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:54:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:54:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:54:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.189 257704 DEBUG oslo_concurrency.processutils [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
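[Annotation] The "ceph df" round trip bracketed by the two processutils lines (Running cmd at 09:54:49.705, returned 0 at 09:54:50.189) is how nova's libvirt driver refreshes Ceph pool capacity while holding the compute_resources lock. The same probe via oslo.concurrency, with the paths and IDs copied from the log:

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide totals live under stats['stats']; per-pool under stats['pools'].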
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.194 257704 DEBUG nova.compute.provider_tree [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.212 257704 DEBUG nova.scheduler.client.report [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.231 257704 DEBUG oslo_concurrency.lockutils [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.256 257704 INFO nova.scheduler.client.report [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance a30689a0-a2d7-4b8d-9f45-9763cda52bf9
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.321 257704 DEBUG oslo_concurrency.lockutils [None req-22d3f09b-bf83-4e48-a61a-ef8446b0465d 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.386 257704 DEBUG nova.compute.manager [req-027b7c4e-aa7b-4165-87c6-d99217607ee1 req-6fa63ce5-a7fa-458f-a11c-5bc212259591 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Received event network-vif-plugged-a483f88b-7075-47e4-a535-d23a6d20a8b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.386 257704 DEBUG oslo_concurrency.lockutils [req-027b7c4e-aa7b-4165-87c6-d99217607ee1 req-6fa63ce5-a7fa-458f-a11c-5bc212259591 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.387 257704 DEBUG oslo_concurrency.lockutils [req-027b7c4e-aa7b-4165-87c6-d99217607ee1 req-6fa63ce5-a7fa-458f-a11c-5bc212259591 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.387 257704 DEBUG oslo_concurrency.lockutils [req-027b7c4e-aa7b-4165-87c6-d99217607ee1 req-6fa63ce5-a7fa-458f-a11c-5bc212259591 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "a30689a0-a2d7-4b8d-9f45-9763cda52bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.387 257704 DEBUG nova.compute.manager [req-027b7c4e-aa7b-4165-87c6-d99217607ee1 req-6fa63ce5-a7fa-458f-a11c-5bc212259591 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] No waiting events found dispatching network-vif-plugged-a483f88b-7075-47e4-a535-d23a6d20a8b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.387 257704 WARNING nova.compute.manager [req-027b7c4e-aa7b-4165-87c6-d99217607ee1 req-6fa63ce5-a7fa-458f-a11c-5bc212259591 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Received unexpected event network-vif-plugged-a483f88b-7075-47e4-a535-d23a6d20a8b0 for instance with vm_state deleted and task_state None.
Nov 24 09:54:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 30 op/s
Nov 24 09:54:50 compute-0 nova_compute[257700]: 2025-11-24 09:54:50.657 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:50.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:50 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2362967868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:54:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:54:50] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:54:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:54:50] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:54:51 compute-0 ceph-mon[74331]: pgmap v838: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 30 op/s
Nov 24 09:54:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:51.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 24 KiB/s wr, 58 op/s
Nov 24 09:54:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:52.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:53 compute-0 nova_compute[257700]: 2025-11-24 09:54:53.357 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:53.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:53 compute-0 ceph-mon[74331]: pgmap v839: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 24 KiB/s wr, 58 op/s
Nov 24 09:54:54 compute-0 nova_compute[257700]: 2025-11-24 09:54:54.193 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:54 compute-0 nova_compute[257700]: 2025-11-24 09:54:54.284 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.3 KiB/s wr, 56 op/s
Nov 24 09:54:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:54.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:54:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:54:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:54:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:54:55 compute-0 nova_compute[257700]: 2025-11-24 09:54:55.660 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:55.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:55 compute-0 ceph-mon[74331]: pgmap v840: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.3 KiB/s wr, 56 op/s
Nov 24 09:54:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.3 KiB/s wr, 56 op/s
Nov 24 09:54:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:54:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:56.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:54:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:57.504Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:54:57 compute-0 sudo[266990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:54:57 compute-0 sudo[266990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:57 compute-0 sudo[266990]: pam_unix(sudo:session): session closed for user root
Nov 24 09:54:57 compute-0 ceph-mon[74331]: pgmap v841: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.3 KiB/s wr, 56 op/s
Nov 24 09:54:57 compute-0 sudo[267016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:54:57 compute-0 sudo[267016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:57.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:58 compute-0 sudo[267016]: pam_unix(sudo:session): session closed for user root
Nov 24 09:54:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.4 KiB/s wr, 58 op/s
Nov 24 09:54:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:54:58 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:54:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:54:58 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:54:58 compute-0 sudo[267074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:54:58 compute-0 sudo[267074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:58 compute-0 sudo[267074]: pam_unix(sudo:session): session closed for user root
Nov 24 09:54:58 compute-0 sudo[267099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:54:58 compute-0 sudo[267099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
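[Annotation] cephadm here drives "ceph-volume lvm batch" through its copied-in binary, feeding the config/keyring blob on stdin via "--config-json -". A subprocess sketch of that invocation shape; the JSON body is illustrative, not the real credentials cephadm passes, and the --env/--image flags from the logged command are omitted for brevity:

    import json
    import subprocess

    cephadm = ('/var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/'
               'cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36')
    cmd = ['sudo', '/bin/python3', cephadm,
           '--timeout', '895',
           'ceph-volume', '--fsid', '84a084c3-61a7-5de7-8207-1f88efa59a64',
           '--config-json', '-', '--',
           'lvm', 'batch', '--no-auto', '/dev/ceph_vg0/ceph_lv0',
           '--yes', '--no-systemd']
    # Assumed config-json shape: a ceph.conf body plus a keyring, both elided here.
    blob = json.dumps({'config': '# minimal ceph.conf', 'keyring': '...'})
    subprocess.run(cmd, input=blob.encode(), check=True)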
Nov 24 09:54:58 compute-0 nova_compute[257700]: 2025-11-24 09:54:58.358 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:54:58 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:54:58 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:54:58 compute-0 ceph-mon[74331]: pgmap v842: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.4 KiB/s wr, 58 op/s
Nov 24 09:54:58 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:54:58 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:54:58 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:54:58 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:54:58 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:54:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:54:58 compute-0 podman[267164]: 2025-11-24 09:54:58.613960579 +0000 UTC m=+0.037308105 container create a5dfe71d7ccb2e4e09ce8e53d36c3763e82006d652feb7668e872bac48d4ab3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_boyd, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 09:54:58 compute-0 systemd[1]: Started libpod-conmon-a5dfe71d7ccb2e4e09ce8e53d36c3763e82006d652feb7668e872bac48d4ab3b.scope.
Nov 24 09:54:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:54:58 compute-0 podman[267164]: 2025-11-24 09:54:58.688756 +0000 UTC m=+0.112103556 container init a5dfe71d7ccb2e4e09ce8e53d36c3763e82006d652feb7668e872bac48d4ab3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_boyd, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 09:54:58 compute-0 podman[267164]: 2025-11-24 09:54:58.597764108 +0000 UTC m=+0.021111654 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:54:58 compute-0 podman[267164]: 2025-11-24 09:54:58.695333203 +0000 UTC m=+0.118680729 container start a5dfe71d7ccb2e4e09ce8e53d36c3763e82006d652feb7668e872bac48d4ab3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_boyd, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 09:54:58 compute-0 podman[267164]: 2025-11-24 09:54:58.698319287 +0000 UTC m=+0.121666863 container attach a5dfe71d7ccb2e4e09ce8e53d36c3763e82006d652feb7668e872bac48d4ab3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:54:58 compute-0 blissful_boyd[267180]: 167 167
Nov 24 09:54:58 compute-0 systemd[1]: libpod-a5dfe71d7ccb2e4e09ce8e53d36c3763e82006d652feb7668e872bac48d4ab3b.scope: Deactivated successfully.
Nov 24 09:54:58 compute-0 podman[267164]: 2025-11-24 09:54:58.701560197 +0000 UTC m=+0.124907743 container died a5dfe71d7ccb2e4e09ce8e53d36c3763e82006d652feb7668e872bac48d4ab3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_boyd, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:54:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-aeba885631d95d3c0722d28f22f26bb9f2c73af73a1c080444ccc700508c2c35-merged.mount: Deactivated successfully.
Nov 24 09:54:58 compute-0 podman[267164]: 2025-11-24 09:54:58.737198819 +0000 UTC m=+0.160546345 container remove a5dfe71d7ccb2e4e09ce8e53d36c3763e82006d652feb7668e872bac48d4ab3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_boyd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 09:54:58 compute-0 systemd[1]: libpod-conmon-a5dfe71d7ccb2e4e09ce8e53d36c3763e82006d652feb7668e872bac48d4ab3b.scope: Deactivated successfully.
Nov 24 09:54:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:54:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:54:58.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:54:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:54:58.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:54:58 compute-0 podman[267205]: 2025-11-24 09:54:58.91178704 +0000 UTC m=+0.045087227 container create 3985efa6db9e6823b2f1a7776e560110193aa365916bb0bba419fc342da9aef4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 09:54:58 compute-0 systemd[1]: Started libpod-conmon-3985efa6db9e6823b2f1a7776e560110193aa365916bb0bba419fc342da9aef4.scope.
Nov 24 09:54:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f4675601645a9733ed3e7b1ed7694d373bf10e7b350b753cd2308755c2aed2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f4675601645a9733ed3e7b1ed7694d373bf10e7b350b753cd2308755c2aed2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f4675601645a9733ed3e7b1ed7694d373bf10e7b350b753cd2308755c2aed2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f4675601645a9733ed3e7b1ed7694d373bf10e7b350b753cd2308755c2aed2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59f4675601645a9733ed3e7b1ed7694d373bf10e7b350b753cd2308755c2aed2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:54:58 compute-0 podman[267205]: 2025-11-24 09:54:58.890247207 +0000 UTC m=+0.023547444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:54:58 compute-0 podman[267205]: 2025-11-24 09:54:58.992268452 +0000 UTC m=+0.125568669 container init 3985efa6db9e6823b2f1a7776e560110193aa365916bb0bba419fc342da9aef4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_stonebraker, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:54:59 compute-0 podman[267205]: 2025-11-24 09:54:59.000302321 +0000 UTC m=+0.133602518 container start 3985efa6db9e6823b2f1a7776e560110193aa365916bb0bba419fc342da9aef4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:54:59 compute-0 podman[267205]: 2025-11-24 09:54:59.003709645 +0000 UTC m=+0.137009862 container attach 3985efa6db9e6823b2f1a7776e560110193aa365916bb0bba419fc342da9aef4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_stonebraker, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:54:59 compute-0 ecstatic_stonebraker[267222]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:54:59 compute-0 ecstatic_stonebraker[267222]: --> All data devices are unavailable
Nov 24 09:54:59 compute-0 systemd[1]: libpod-3985efa6db9e6823b2f1a7776e560110193aa365916bb0bba419fc342da9aef4.scope: Deactivated successfully.
Nov 24 09:54:59 compute-0 podman[267205]: 2025-11-24 09:54:59.341782473 +0000 UTC m=+0.475082670 container died 3985efa6db9e6823b2f1a7776e560110193aa365916bb0bba419fc342da9aef4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 09:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-59f4675601645a9733ed3e7b1ed7694d373bf10e7b350b753cd2308755c2aed2-merged.mount: Deactivated successfully.
Nov 24 09:54:59 compute-0 podman[267205]: 2025-11-24 09:54:59.385606288 +0000 UTC m=+0.518906475 container remove 3985efa6db9e6823b2f1a7776e560110193aa365916bb0bba419fc342da9aef4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:54:59 compute-0 systemd[1]: libpod-conmon-3985efa6db9e6823b2f1a7776e560110193aa365916bb0bba419fc342da9aef4.scope: Deactivated successfully.
Nov 24 09:54:59 compute-0 sudo[267099]: pam_unix(sudo:session): session closed for user root
Nov 24 09:54:59 compute-0 sudo[267249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:54:59 compute-0 sudo[267249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:59 compute-0 sudo[267249]: pam_unix(sudo:session): session closed for user root
Nov 24 09:54:59 compute-0 sudo[267274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:54:59 compute-0 sudo[267274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:54:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:54:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:54:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:54:59.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:54:59 compute-0 podman[267340]: 2025-11-24 09:54:59.920994839 +0000 UTC m=+0.036757980 container create d83f50b0738efbf52815f855dfe55a36c6c5c9c1b29b08b7693978ada3fbb706 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_napier, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 24 09:54:59 compute-0 systemd[1]: Started libpod-conmon-d83f50b0738efbf52815f855dfe55a36c6c5c9c1b29b08b7693978ada3fbb706.scope.
Nov 24 09:54:59 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:54:59 compute-0 podman[267340]: 2025-11-24 09:54:59.976909813 +0000 UTC m=+0.092672954 container init d83f50b0738efbf52815f855dfe55a36c6c5c9c1b29b08b7693978ada3fbb706 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_napier, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:54:59 compute-0 podman[267340]: 2025-11-24 09:54:59.983030344 +0000 UTC m=+0.098793515 container start d83f50b0738efbf52815f855dfe55a36c6c5c9c1b29b08b7693978ada3fbb706 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_napier, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:54:59 compute-0 podman[267340]: 2025-11-24 09:54:59.986822458 +0000 UTC m=+0.102585619 container attach d83f50b0738efbf52815f855dfe55a36c6c5c9c1b29b08b7693978ada3fbb706 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_napier, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:54:59 compute-0 admiring_napier[267356]: 167 167
Nov 24 09:54:59 compute-0 systemd[1]: libpod-d83f50b0738efbf52815f855dfe55a36c6c5c9c1b29b08b7693978ada3fbb706.scope: Deactivated successfully.
Nov 24 09:54:59 compute-0 podman[267340]: 2025-11-24 09:54:59.988413818 +0000 UTC m=+0.104176959 container died d83f50b0738efbf52815f855dfe55a36c6c5c9c1b29b08b7693978ada3fbb706 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_napier, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:55:00 compute-0 podman[267340]: 2025-11-24 09:54:59.905647249 +0000 UTC m=+0.021410410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:55:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:54:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:55:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:55:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:55:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-03e0a07337da371da6ccb09aa35add8c6a473610e44cfc871469c5d7871512cf-merged.mount: Deactivated successfully.
Nov 24 09:55:00 compute-0 podman[267340]: 2025-11-24 09:55:00.02282025 +0000 UTC m=+0.138583391 container remove d83f50b0738efbf52815f855dfe55a36c6c5c9c1b29b08b7693978ada3fbb706 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_napier, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:55:00 compute-0 systemd[1]: libpod-conmon-d83f50b0738efbf52815f855dfe55a36c6c5c9c1b29b08b7693978ada3fbb706.scope: Deactivated successfully.
Nov 24 09:55:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 24 09:55:00 compute-0 podman[267381]: 2025-11-24 09:55:00.175732674 +0000 UTC m=+0.045121887 container create c46f8cda2dcc964b010e722afda8cab934c373726b580a19414448681ea129df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cannon, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 24 09:55:00 compute-0 systemd[1]: Started libpod-conmon-c46f8cda2dcc964b010e722afda8cab934c373726b580a19414448681ea129df.scope.
Nov 24 09:55:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73846489f3c5af9eaff37ba00da427e8998cd40196b167aa3b3b1c9c58b032a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73846489f3c5af9eaff37ba00da427e8998cd40196b167aa3b3b1c9c58b032a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73846489f3c5af9eaff37ba00da427e8998cd40196b167aa3b3b1c9c58b032a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73846489f3c5af9eaff37ba00da427e8998cd40196b167aa3b3b1c9c58b032a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:55:00 compute-0 podman[267381]: 2025-11-24 09:55:00.158799295 +0000 UTC m=+0.028188528 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:55:00 compute-0 podman[267381]: 2025-11-24 09:55:00.254679678 +0000 UTC m=+0.124068931 container init c46f8cda2dcc964b010e722afda8cab934c373726b580a19414448681ea129df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:55:00 compute-0 podman[267381]: 2025-11-24 09:55:00.262767238 +0000 UTC m=+0.132156461 container start c46f8cda2dcc964b010e722afda8cab934c373726b580a19414448681ea129df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:55:00 compute-0 podman[267381]: 2025-11-24 09:55:00.266561993 +0000 UTC m=+0.135951266 container attach c46f8cda2dcc964b010e722afda8cab934c373726b580a19414448681ea129df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:55:00 compute-0 admiring_cannon[267397]: {
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:     "0": [
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:         {
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             "devices": [
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "/dev/loop3"
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             ],
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             "lv_name": "ceph_lv0",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             "lv_size": "21470642176",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             "name": "ceph_lv0",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             "tags": {
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.cluster_name": "ceph",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.crush_device_class": "",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.encrypted": "0",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.osd_id": "0",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.type": "block",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.vdo": "0",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:                 "ceph.with_tpm": "0"
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             },
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             "type": "block",
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:             "vg_name": "ceph_vg0"
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:         }
Nov 24 09:55:00 compute-0 admiring_cannon[267397]:     ]
Nov 24 09:55:00 compute-0 admiring_cannon[267397]: }
Nov 24 09:55:00 compute-0 systemd[1]: libpod-c46f8cda2dcc964b010e722afda8cab934c373726b580a19414448681ea129df.scope: Deactivated successfully.
Nov 24 09:55:00 compute-0 podman[267381]: 2025-11-24 09:55:00.619880297 +0000 UTC m=+0.489269520 container died c46f8cda2dcc964b010e722afda8cab934c373726b580a19414448681ea129df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f73846489f3c5af9eaff37ba00da427e8998cd40196b167aa3b3b1c9c58b032a-merged.mount: Deactivated successfully.
Nov 24 09:55:00 compute-0 nova_compute[257700]: 2025-11-24 09:55:00.661 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:00 compute-0 podman[267381]: 2025-11-24 09:55:00.663128618 +0000 UTC m=+0.532517841 container remove c46f8cda2dcc964b010e722afda8cab934c373726b580a19414448681ea129df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 09:55:00 compute-0 systemd[1]: libpod-conmon-c46f8cda2dcc964b010e722afda8cab934c373726b580a19414448681ea129df.scope: Deactivated successfully.
Nov 24 09:55:00 compute-0 sudo[267274]: pam_unix(sudo:session): session closed for user root
Nov 24 09:55:00 compute-0 sudo[267419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:55:00 compute-0 sudo[267419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:55:00 compute-0 sudo[267419]: pam_unix(sudo:session): session closed for user root
Nov 24 09:55:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:00.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:00 compute-0 sudo[267444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:55:00 compute-0 sudo[267444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:55:00 compute-0 podman[267468]: 2025-11-24 09:55:00.929127902 +0000 UTC m=+0.058710094 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 24 09:55:01 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:55:01] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 09:55:01 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:55:01] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 09:55:01 compute-0 podman[267469]: 2025-11-24 09:55:01.020001011 +0000 UTC m=+0.145197465 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Nov 24 09:55:01 compute-0 ceph-mon[74331]: pgmap v843: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 24 09:55:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:55:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1926278703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:55:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1926278703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:55:01 compute-0 podman[267555]: 2025-11-24 09:55:01.250197538 +0000 UTC m=+0.037616972 container create 5c85aaf896814c51123ed0364792405482d54a765aba7b380c93f0b0c7d03e12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_nash, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:55:01 compute-0 systemd[1]: Started libpod-conmon-5c85aaf896814c51123ed0364792405482d54a765aba7b380c93f0b0c7d03e12.scope.
Nov 24 09:55:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:55:01 compute-0 podman[267555]: 2025-11-24 09:55:01.233004643 +0000 UTC m=+0.020424087 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:55:01 compute-0 podman[267555]: 2025-11-24 09:55:01.342348979 +0000 UTC m=+0.129768423 container init 5c85aaf896814c51123ed0364792405482d54a765aba7b380c93f0b0c7d03e12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_nash, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:55:01 compute-0 podman[267555]: 2025-11-24 09:55:01.348983944 +0000 UTC m=+0.136403378 container start 5c85aaf896814c51123ed0364792405482d54a765aba7b380c93f0b0c7d03e12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:55:01 compute-0 podman[267555]: 2025-11-24 09:55:01.352680016 +0000 UTC m=+0.140099440 container attach 5c85aaf896814c51123ed0364792405482d54a765aba7b380c93f0b0c7d03e12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 24 09:55:01 compute-0 funny_nash[267572]: 167 167
Nov 24 09:55:01 compute-0 systemd[1]: libpod-5c85aaf896814c51123ed0364792405482d54a765aba7b380c93f0b0c7d03e12.scope: Deactivated successfully.
Nov 24 09:55:01 compute-0 podman[267555]: 2025-11-24 09:55:01.356868089 +0000 UTC m=+0.144287513 container died 5c85aaf896814c51123ed0364792405482d54a765aba7b380c93f0b0c7d03e12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_nash, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:55:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-48e62c00873737f5f7021445210dd656499387fdc66cc48643edc7f4ff85b8e5-merged.mount: Deactivated successfully.
Nov 24 09:55:01 compute-0 podman[267555]: 2025-11-24 09:55:01.39452185 +0000 UTC m=+0.181941274 container remove 5c85aaf896814c51123ed0364792405482d54a765aba7b380c93f0b0c7d03e12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:55:01 compute-0 systemd[1]: libpod-conmon-5c85aaf896814c51123ed0364792405482d54a765aba7b380c93f0b0c7d03e12.scope: Deactivated successfully.
Nov 24 09:55:01 compute-0 podman[267595]: 2025-11-24 09:55:01.545890348 +0000 UTC m=+0.040203456 container create 0cbe445c91428ed7aedff8d752276f3256669bf15c5891b699dcac0fe7eaca3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_ardinghelli, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 09:55:01 compute-0 systemd[1]: Started libpod-conmon-0cbe445c91428ed7aedff8d752276f3256669bf15c5891b699dcac0fe7eaca3a.scope.
Nov 24 09:55:01 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c69164f3eabb58930e564a94b8657bb05cfe963166bb648bf0b36c0eb81dd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c69164f3eabb58930e564a94b8657bb05cfe963166bb648bf0b36c0eb81dd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c69164f3eabb58930e564a94b8657bb05cfe963166bb648bf0b36c0eb81dd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c69164f3eabb58930e564a94b8657bb05cfe963166bb648bf0b36c0eb81dd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:55:01 compute-0 podman[267595]: 2025-11-24 09:55:01.622921114 +0000 UTC m=+0.117234242 container init 0cbe445c91428ed7aedff8d752276f3256669bf15c5891b699dcac0fe7eaca3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:55:01 compute-0 podman[267595]: 2025-11-24 09:55:01.530861986 +0000 UTC m=+0.025175114 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:55:01 compute-0 podman[267595]: 2025-11-24 09:55:01.630366158 +0000 UTC m=+0.124679266 container start 0cbe445c91428ed7aedff8d752276f3256669bf15c5891b699dcac0fe7eaca3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_ardinghelli, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:55:01 compute-0 podman[267595]: 2025-11-24 09:55:01.633774082 +0000 UTC m=+0.128087190 container attach 0cbe445c91428ed7aedff8d752276f3256669bf15c5891b699dcac0fe7eaca3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:55:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:01.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Nov 24 09:55:02 compute-0 lvm[267687]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:55:02 compute-0 lvm[267687]: VG ceph_vg0 finished
Nov 24 09:55:02 compute-0 upbeat_ardinghelli[267613]: {}
Nov 24 09:55:02 compute-0 systemd[1]: libpod-0cbe445c91428ed7aedff8d752276f3256669bf15c5891b699dcac0fe7eaca3a.scope: Deactivated successfully.
Nov 24 09:55:02 compute-0 systemd[1]: libpod-0cbe445c91428ed7aedff8d752276f3256669bf15c5891b699dcac0fe7eaca3a.scope: Consumed 1.026s CPU time.
Nov 24 09:55:02 compute-0 podman[267595]: 2025-11-24 09:55:02.295523302 +0000 UTC m=+0.789836410 container died 0cbe445c91428ed7aedff8d752276f3256669bf15c5891b699dcac0fe7eaca3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4c69164f3eabb58930e564a94b8657bb05cfe963166bb648bf0b36c0eb81dd5-merged.mount: Deactivated successfully.
Nov 24 09:55:02 compute-0 podman[267595]: 2025-11-24 09:55:02.340767611 +0000 UTC m=+0.835080719 container remove 0cbe445c91428ed7aedff8d752276f3256669bf15c5891b699dcac0fe7eaca3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_ardinghelli, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:55:02 compute-0 systemd[1]: libpod-conmon-0cbe445c91428ed7aedff8d752276f3256669bf15c5891b699dcac0fe7eaca3a.scope: Deactivated successfully.
Nov 24 09:55:02 compute-0 sudo[267444]: pam_unix(sudo:session): session closed for user root
Nov 24 09:55:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:55:02 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:55:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:55:02 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:55:02 compute-0 sudo[267703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:55:02 compute-0 sudo[267703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:55:02 compute-0 sudo[267703]: pam_unix(sudo:session): session closed for user root
Nov 24 09:55:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:02.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:03 compute-0 nova_compute[257700]: 2025-11-24 09:55:03.339 257704 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978088.3378954, a30689a0-a2d7-4b8d-9f45-9763cda52bf9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:55:03 compute-0 nova_compute[257700]: 2025-11-24 09:55:03.340 257704 INFO nova.compute.manager [-] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] VM Stopped (Lifecycle Event)
Nov 24 09:55:03 compute-0 nova_compute[257700]: 2025-11-24 09:55:03.357 257704 DEBUG nova.compute.manager [None req-7672d19d-e439-45cd-b54b-a0a7fe9f573a - - - - - -] [instance: a30689a0-a2d7-4b8d-9f45-9763cda52bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:55:03 compute-0 nova_compute[257700]: 2025-11-24 09:55:03.359 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:03 compute-0 ceph-mon[74331]: pgmap v844: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Nov 24 09:55:03 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:55:03 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:55:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:03.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 09:55:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:04.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:55:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:55:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:55:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:55:05 compute-0 ceph-mon[74331]: pgmap v845: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 09:55:05 compute-0 nova_compute[257700]: 2025-11-24 09:55:05.662 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:05.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 09:55:06 compute-0 sudo[267731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:55:06 compute-0 sudo[267731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:55:06 compute-0 sudo[267731]: pam_unix(sudo:session): session closed for user root
Nov 24 09:55:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:06.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:07 compute-0 ceph-mon[74331]: pgmap v846: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 09:55:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:07.505Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
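
This dispatcher error recurs throughout the capture: Alertmanager cannot POST the ceph-dashboard webhook to either peer before its context deadline. Note the receiver URLs use an http:// scheme on port 8443; if those dashboards actually serve TLS there, that mismatch alone could produce exactly this timeout, though nothing in this log confirms it. A hedged reachability probe, with the URL copied from the log and a 5 s timeout chosen arbitrarily:

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        urllib.request.urlopen(url, data=b"{}", timeout=5)
        print(f"{url}: reachable")
    except Exception as exc:
        print(f"{url}: {exc}")
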
Nov 24 09:55:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:55:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5763 writes, 25K keys, 5763 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 5763 writes, 5763 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1542 writes, 6562 keys, 1542 commit groups, 1.0 writes per commit group, ingest: 11.29 MB, 0.02 MB/s
                                           Interval WAL: 1542 writes, 1542 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    101.1      0.40              0.12        14    0.028       0      0       0.0       0.0
                                             L6      1/0   11.97 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.1    160.9    137.6      1.20              0.40        13    0.092     67K   6885       0.0       0.0
                                            Sum      1/0   11.97 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   5.1    121.0    128.5      1.60              0.51        27    0.059     67K   6885       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4    163.3    161.1      0.45              0.19        10    0.045     29K   2551       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    160.9    137.6      1.20              0.40        13    0.092     67K   6885       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    102.0      0.39              0.12        13    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.039, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.11 MB/s write, 0.19 GB read, 0.11 MB/s read, 1.6 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b87797d350#2 capacity: 304.00 MB usage: 14.59 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000121 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(798,14.04 MB,4.61698%) FilterBlock(28,202.30 KB,0.0649854%) IndexBlock(28,360.30 KB,0.115741%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
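
The headline numbers in the DB Stats dump above are internally consistent and can be re-derived by hand; the reported MB/s figures are just the ingest totals spread over the 1800 s uptime and 600 s interval windows (treating RocksDB's GB/MB as binary units is an assumption based on its usual conventions):

    print(round(0.05 * 1024 / 1800.0, 2))  # 0.03 MB/s cumulative ingest
    print(round(11.29 / 600.0, 2))         # 0.02 MB/s interval ingest
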
Nov 24 09:55:07 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:55:07.724 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:55:07 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:55:07.725 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:55:07 compute-0 nova_compute[257700]: 2025-11-24 09:55:07.725 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:07 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:55:07.726 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:55:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:07.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:55:08 compute-0 nova_compute[257700]: 2025-11-24 09:55:08.360 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:08.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:08.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:55:09 compute-0 ceph-mon[74331]: pgmap v847: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:55:09 compute-0 podman[267760]: 2025-11-24 09:55:09.798068036 +0000 UTC m=+0.064669452 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
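
The podman event above is a periodic healthcheck result for ovn_metadata_agent; per its config_data, the check executes /openstack/healthcheck from a mounted healthchecks directory. The same check can be re-run by hand, since "podman healthcheck run" exits non-zero when the check fails (container name taken from the log line):

    import subprocess

    # Re-run the container's configured healthcheck and report the outcome.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                        capture_output=True).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
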
Nov 24 09:55:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:09.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:55:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:55:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:55:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:55:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:10 compute-0 ceph-mon[74331]: pgmap v848: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:10 compute-0 nova_compute[257700]: 2025-11-24 09:55:10.665 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:10.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:55:10] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 09:55:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:55:10] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
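
Prometheus scrapes the mgr's prometheus module every 10 s in this capture (09:55:10, :20, :30), pulling ~48 kB of metrics each time. The same endpoint can be fetched by hand; the port here is an assumption, since 9283 is the module's default and never appears in the log:

    import urllib.request

    url = "http://compute-0.ctlplane.example.com:9283/metrics"  # port assumed
    body = urllib.request.urlopen(url, timeout=5).read()
    print(len(body), body.splitlines()[0])
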
Nov 24 09:55:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:11.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:55:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:12.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:13 compute-0 ceph-mon[74331]: pgmap v849: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:55:13 compute-0 nova_compute[257700]: 2025-11-24 09:55:13.362 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:13.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:13 compute-0 nova_compute[257700]: 2025-11-24 09:55:13.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:13 compute-0 nova_compute[257700]: 2025-11-24 09:55:13.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:13 compute-0 nova_compute[257700]: 2025-11-24 09:55:13.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:13 compute-0 nova_compute[257700]: 2025-11-24 09:55:13.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:55:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:14.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:14 compute-0 nova_compute[257700]: 2025-11-24 09:55:14.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:55:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:55:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:55:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:55:15 compute-0 ceph-mon[74331]: pgmap v850: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/69888656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1018635266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:55:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:55:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:55:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:55:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:55:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:55:15 compute-0 nova_compute[257700]: 2025-11-24 09:55:15.666 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:15.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:15 compute-0 nova_compute[257700]: 2025-11-24 09:55:15.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:15 compute-0 nova_compute[257700]: 2025-11-24 09:55:15.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:55:15 compute-0 nova_compute[257700]: 2025-11-24 09:55:15.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:55:15 compute-0 nova_compute[257700]: 2025-11-24 09:55:15.934 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:55:15 compute-0 nova_compute[257700]: 2025-11-24 09:55:15.934 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:15 compute-0 nova_compute[257700]: 2025-11-24 09:55:15.935 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:15 compute-0 nova_compute[257700]: 2025-11-24 09:55:15.952 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:15 compute-0 nova_compute[257700]: 2025-11-24 09:55:15.952 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:15 compute-0 nova_compute[257700]: 2025-11-24 09:55:15.952 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:15 compute-0 nova_compute[257700]: 2025-11-24 09:55:15.952 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:55:15 compute-0 nova_compute[257700]: 2025-11-24 09:55:15.953 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:55:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:55:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:55:16 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2401010604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:16 compute-0 nova_compute[257700]: 2025-11-24 09:55:16.368 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
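
Nova's resource audit shells out to the exact command logged above to size Ceph-backed storage; it runs it twice in this pass, once before and once while holding the "compute_resources" lock, and the second call's 0.467 s accounts for most of the 0.582 s the lock is held below. The same call, with the cluster totals read from the JSON (the "stats" keys follow ceph df's JSON output):

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
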
Nov 24 09:55:16 compute-0 nova_compute[257700]: 2025-11-24 09:55:16.537 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:55:16 compute-0 nova_compute[257700]: 2025-11-24 09:55:16.539 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4643MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:55:16 compute-0 nova_compute[257700]: 2025-11-24 09:55:16.540 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:16 compute-0 nova_compute[257700]: 2025-11-24 09:55:16.540 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:16 compute-0 nova_compute[257700]: 2025-11-24 09:55:16.593 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:55:16 compute-0 nova_compute[257700]: 2025-11-24 09:55:16.594 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:55:16 compute-0 nova_compute[257700]: 2025-11-24 09:55:16.613 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:55:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:16.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:55:17 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373378575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:17 compute-0 nova_compute[257700]: 2025-11-24 09:55:17.080 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:55:17 compute-0 nova_compute[257700]: 2025-11-24 09:55:17.085 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:55:17 compute-0 nova_compute[257700]: 2025-11-24 09:55:17.102 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
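
The inventory dict nova reports above fixes schedulable capacity: placement treats (total - reserved) * allocation_ratio as the usable amount per resource class, which is how this 8-vCPU, 7680 MB host advertises 32 schedulable VCPUs. Reproducing the numbers from the logged dict (the formula is placement's commonly documented behavior, stated here as an assumption rather than read from this log):

    # Subset of the inventory data from the log line above.
    inv = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2
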
Nov 24 09:55:17 compute-0 nova_compute[257700]: 2025-11-24 09:55:17.122 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:55:17 compute-0 nova_compute[257700]: 2025-11-24 09:55:17.122 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:17 compute-0 ceph-mon[74331]: pgmap v851: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2401010604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2373378575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2838487968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:17.507Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:55:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:17.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:55:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4207547769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:18 compute-0 nova_compute[257700]: 2025-11-24 09:55:18.363 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:18.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:18.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:55:19 compute-0 nova_compute[257700]: 2025-11-24 09:55:19.117 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:19 compute-0 ceph-mon[74331]: pgmap v852: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:55:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:19.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:19 compute-0 nova_compute[257700]: 2025-11-24 09:55:19.927 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:55:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:55:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:55:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:55:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:55:20.564 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:55:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:55:20.565 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:55:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:55:20.565 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:55:20 compute-0 nova_compute[257700]: 2025-11-24 09:55:20.670 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:20.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:20 compute-0 nova_compute[257700]: 2025-11-24 09:55:20.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:55:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:55:20] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 24 09:55:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:55:20] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 24 09:55:21 compute-0 ceph-mon[74331]: pgmap v853: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:55:21 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3417310250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:21.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:55:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3417310250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:55:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:22.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:23 compute-0 ceph-mon[74331]: pgmap v854: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:55:23 compute-0 nova_compute[257700]: 2025-11-24 09:55:23.366 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:23.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:24 compute-0 sshd-session[267837]: Invalid user angel from 36.255.3.203 port 47770
Nov 24 09:55:24 compute-0 sshd-session[267837]: Received disconnect from 36.255.3.203 port 47770:11: Bye Bye [preauth]
Nov 24 09:55:24 compute-0 sshd-session[267837]: Disconnected from invalid user angel 36.255.3.203 port 47770 [preauth]
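
Unlike everything else in this window, the three sshd lines above are an external password-guessing probe against a nonexistent user, not cluster traffic (a second preauth disconnect, from 83.229.122.23, follows at 09:55:31). A minimal per-source tally of such probes from a journal dump on stdin:

    import re, sys
    from collections import Counter

    # Pattern matches the "Invalid user ... from ... port" line seen above.
    hits = Counter()
    for line in sys.stdin:
        m = re.search(r'Invalid user (\S+) from (\S+) port', line)
        if m:
            hits[m.group(2)] += 1
    print(hits.most_common(5))
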
Nov 24 09:55:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:24.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:55:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:55:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:55:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:55:25 compute-0 ceph-mon[74331]: pgmap v855: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3838070251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:55:25 compute-0 nova_compute[257700]: 2025-11-24 09:55:25.712 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:25.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:26 compute-0 sudo[267842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:55:26 compute-0 sudo[267842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:55:26 compute-0 sudo[267842]: pam_unix(sudo:session): session closed for user root
Nov 24 09:55:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/861026441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:55:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:26.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:27 compute-0 ceph-mon[74331]: pgmap v856: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:55:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:27.508Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:55:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:27.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 09:55:28 compute-0 nova_compute[257700]: 2025-11-24 09:55:28.368 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:28.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:28.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:55:29 compute-0 ceph-mon[74331]: pgmap v857: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 09:55:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:29.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:55:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:55:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:55:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:55:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 09:55:30 compute-0 nova_compute[257700]: 2025-11-24 09:55:30.715 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:30.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:55:30] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 09:55:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:55:30] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 09:55:31 compute-0 ceph-mon[74331]: pgmap v858: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 09:55:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:55:31 compute-0 sshd-session[267872]: Received disconnect from 83.229.122.23 port 53202:11: Bye Bye [preauth]
Nov 24 09:55:31 compute-0 sshd-session[267872]: Disconnected from authenticating user root 83.229.122.23 port 53202 [preauth]
Nov 24 09:55:31 compute-0 podman[267876]: 2025-11-24 09:55:31.811894767 +0000 UTC m=+0.089106446 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:55:31 compute-0 podman[267877]: 2025-11-24 09:55:31.845625402 +0000 UTC m=+0.120699078 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:55:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:31.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:55:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:32.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:33 compute-0 nova_compute[257700]: 2025-11-24 09:55:33.370 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:33 compute-0 ceph-mon[74331]: pgmap v859: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:55:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:33.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 24 09:55:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:34.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:55:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:55:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:55:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:55:35 compute-0 ceph-mon[74331]: pgmap v860: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 24 09:55:35 compute-0 nova_compute[257700]: 2025-11-24 09:55:35.757 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:35.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 24 09:55:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:36.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:37 compute-0 ceph-mon[74331]: pgmap v861: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 24 09:55:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:37.509Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:55:37 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 09:55:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 09:55:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 09:55:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:55:38 compute-0 nova_compute[257700]: 2025-11-24 09:55:38.371 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:38 compute-0 sshd-session[267870]: error: kex_exchange_identification: read: Connection timed out
Nov 24 09:55:38 compute-0 sshd-session[267870]: banner exchange: Connection from 14.215.126.91 port 42464: Connection timed out
Nov 24 09:55:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:38.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:38.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:55:39 compute-0 ceph-mon[74331]: pgmap v862: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:55:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:39.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:55:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:55:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:55:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:55:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Nov 24 09:55:40 compute-0 ovn_controller[155123]: 2025-11-24T09:55:40Z|00038|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 24 09:55:40 compute-0 ceph-mon[74331]: pgmap v863: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Nov 24 09:55:40 compute-0 nova_compute[257700]: 2025-11-24 09:55:40.758 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:40 compute-0 podman[267929]: 2025-11-24 09:55:40.77803069 +0000 UTC m=+0.050679196 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 09:55:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:40.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:55:40] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 09:55:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:55:40] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 09:55:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:41.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 24 09:55:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:42.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:43 compute-0 ceph-mon[74331]: pgmap v864: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 24 09:55:43 compute-0 nova_compute[257700]: 2025-11-24 09:55:43.373 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:43.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 09:55:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:44.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:55:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:55:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:55:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:55:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 24 09:55:45 compute-0 ceph-mon[74331]: pgmap v865: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:55:45
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', '.nfs', 'volumes', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'images', '.mgr', 'default.rgw.meta']
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:55:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:55:45 compute-0 nova_compute[257700]: 2025-11-24 09:55:45.760 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:45.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 09:55:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:55:46 compute-0 sudo[267954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:55:46 compute-0 sudo[267954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:55:46 compute-0 sudo[267954]: pam_unix(sudo:session): session closed for user root
Nov 24 09:55:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:46.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:47 compute-0 ceph-mon[74331]: pgmap v866: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 09:55:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:47.510Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:55:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:47.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 09:55:48 compute-0 nova_compute[257700]: 2025-11-24 09:55:48.374 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:48.867Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:55:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:48.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:55:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:48.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:49 compute-0 ceph-mon[74331]: pgmap v867: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 09:55:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:49.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:55:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:55:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:55:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:55:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 09:55:50 compute-0 nova_compute[257700]: 2025-11-24 09:55:50.762 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:50.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:55:50] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 09:55:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:55:50] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 09:55:51 compute-0 ceph-mon[74331]: pgmap v868: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 09:55:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:51.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 24 09:55:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:52.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:53 compute-0 ceph-mon[74331]: pgmap v869: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 24 09:55:53 compute-0 nova_compute[257700]: 2025-11-24 09:55:53.376 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:53.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 12 KiB/s wr, 1 op/s
Nov 24 09:55:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:54.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:55:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:55:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:55:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:55:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:55:55 compute-0 ceph-mon[74331]: pgmap v870: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 12 KiB/s wr, 1 op/s
Nov 24 09:55:55 compute-0 nova_compute[257700]: 2025-11-24 09:55:55.764 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:55.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 12 KiB/s wr, 1 op/s
Nov 24 09:55:56 compute-0 ceph-mon[74331]: pgmap v871: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 12 KiB/s wr, 1 op/s
Nov 24 09:55:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:56.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:57.511Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:55:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:55:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:57.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:55:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 14 KiB/s wr, 2 op/s
Nov 24 09:55:58 compute-0 sshd-session[267991]: Invalid user support from 78.128.112.74 port 55260
Nov 24 09:55:58 compute-0 nova_compute[257700]: 2025-11-24 09:55:58.378 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:55:58 compute-0 sshd-session[267991]: Connection closed by invalid user support 78.128.112.74 port 55260 [preauth]
Nov 24 09:55:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:55:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:55:58.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:55:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:55:58.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:55:59 compute-0 ceph-mon[74331]: pgmap v872: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 14 KiB/s wr, 2 op/s
Nov 24 09:55:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:55:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:55:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:55:59.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:56:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:56:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:56:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:56:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Nov 24 09:56:00 compute-0 nova_compute[257700]: 2025-11-24 09:56:00.765 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:00.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:56:00] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Nov 24 09:56:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:56:00] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Nov 24 09:56:01 compute-0 ceph-mon[74331]: pgmap v873: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Nov 24 09:56:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:56:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2361167424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:56:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2361167424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:56:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:01.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Nov 24 09:56:02 compute-0 sudo[267999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:56:02 compute-0 sudo[267999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:02 compute-0 sudo[267999]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:02 compute-0 podman[267998]: 2025-11-24 09:56:02.77848894 +0000 UTC m=+0.052248035 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 09:56:02 compute-0 podman[268002]: 2025-11-24 09:56:02.809239421 +0000 UTC m=+0.080526185 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:56:02 compute-0 sudo[268057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:56:02 compute-0 sudo[268057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:02.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:03 compute-0 ceph-mon[74331]: pgmap v874: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Nov 24 09:56:03 compute-0 sudo[268057]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:03 compute-0 nova_compute[257700]: 2025-11-24 09:56:03.379 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 3.2 KiB/s wr, 1 op/s
Nov 24 09:56:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:56:03 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:56:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:56:03 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:56:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:03 compute-0 sudo[268125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:56:03 compute-0 sudo[268125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:03 compute-0 sudo[268125]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:03 compute-0 sudo[268150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:56:03 compute-0 sudo[268150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:56:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:03.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:56:04 compute-0 podman[268217]: 2025-11-24 09:56:04.106778295 +0000 UTC m=+0.041429966 container create 174dc23a3f236aaaa132acb03884d4ef34c3c4f380153dc38a8ca63a90166c4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_dijkstra, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:56:04 compute-0 systemd[1]: Started libpod-conmon-174dc23a3f236aaaa132acb03884d4ef34c3c4f380153dc38a8ca63a90166c4e.scope.
Nov 24 09:56:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:56:04 compute-0 podman[268217]: 2025-11-24 09:56:04.088870382 +0000 UTC m=+0.023522083 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:56:04 compute-0 podman[268217]: 2025-11-24 09:56:04.192217179 +0000 UTC m=+0.126868930 container init 174dc23a3f236aaaa132acb03884d4ef34c3c4f380153dc38a8ca63a90166c4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:56:04 compute-0 podman[268217]: 2025-11-24 09:56:04.200664188 +0000 UTC m=+0.135315859 container start 174dc23a3f236aaaa132acb03884d4ef34c3c4f380153dc38a8ca63a90166c4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_dijkstra, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 09:56:04 compute-0 podman[268217]: 2025-11-24 09:56:04.205178951 +0000 UTC m=+0.139830642 container attach 174dc23a3f236aaaa132acb03884d4ef34c3c4f380153dc38a8ca63a90166c4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_dijkstra, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:56:04 compute-0 ecstatic_dijkstra[268234]: 167 167
Nov 24 09:56:04 compute-0 systemd[1]: libpod-174dc23a3f236aaaa132acb03884d4ef34c3c4f380153dc38a8ca63a90166c4e.scope: Deactivated successfully.
Nov 24 09:56:04 compute-0 podman[268217]: 2025-11-24 09:56:04.208359059 +0000 UTC m=+0.143010740 container died 174dc23a3f236aaaa132acb03884d4ef34c3c4f380153dc38a8ca63a90166c4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 09:56:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c2b9726f5770f8fea93eb88c2ecbc0490263db76e611e2a88a522cbdc4fa032-merged.mount: Deactivated successfully.
Nov 24 09:56:04 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:56:04 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:56:04 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:56:04 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:56:04 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:56:04 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:56:04 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:56:04 compute-0 podman[268217]: 2025-11-24 09:56:04.251808525 +0000 UTC m=+0.186460196 container remove 174dc23a3f236aaaa132acb03884d4ef34c3c4f380153dc38a8ca63a90166c4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_dijkstra, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:56:04 compute-0 systemd[1]: libpod-conmon-174dc23a3f236aaaa132acb03884d4ef34c3c4f380153dc38a8ca63a90166c4e.scope: Deactivated successfully.
Nov 24 09:56:04 compute-0 podman[268256]: 2025-11-24 09:56:04.404795961 +0000 UTC m=+0.043056437 container create bd394816bd63eb0b66099f4f8adbf16f429f038df8287b242e0c10cc4434979e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_booth, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:56:04 compute-0 systemd[1]: Started libpod-conmon-bd394816bd63eb0b66099f4f8adbf16f429f038df8287b242e0c10cc4434979e.scope.
Nov 24 09:56:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016edb2b378d9bf74cffd97e7d621c2a26bf85588ea8323c0040aa4cbfb7ac35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016edb2b378d9bf74cffd97e7d621c2a26bf85588ea8323c0040aa4cbfb7ac35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016edb2b378d9bf74cffd97e7d621c2a26bf85588ea8323c0040aa4cbfb7ac35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016edb2b378d9bf74cffd97e7d621c2a26bf85588ea8323c0040aa4cbfb7ac35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016edb2b378d9bf74cffd97e7d621c2a26bf85588ea8323c0040aa4cbfb7ac35/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:04 compute-0 podman[268256]: 2025-11-24 09:56:04.47420562 +0000 UTC m=+0.112466096 container init bd394816bd63eb0b66099f4f8adbf16f429f038df8287b242e0c10cc4434979e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_booth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 09:56:04 compute-0 podman[268256]: 2025-11-24 09:56:04.386993331 +0000 UTC m=+0.025253827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:56:04 compute-0 podman[268256]: 2025-11-24 09:56:04.482542726 +0000 UTC m=+0.120803202 container start bd394816bd63eb0b66099f4f8adbf16f429f038df8287b242e0c10cc4434979e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_booth, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 09:56:04 compute-0 podman[268256]: 2025-11-24 09:56:04.485488559 +0000 UTC m=+0.123749055 container attach bd394816bd63eb0b66099f4f8adbf16f429f038df8287b242e0c10cc4434979e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_booth, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:56:04 compute-0 xenodochial_booth[268273]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:56:04 compute-0 xenodochial_booth[268273]: --> All data devices are unavailable
Nov 24 09:56:04 compute-0 systemd[1]: libpod-bd394816bd63eb0b66099f4f8adbf16f429f038df8287b242e0c10cc4434979e.scope: Deactivated successfully.
Nov 24 09:56:04 compute-0 podman[268256]: 2025-11-24 09:56:04.794945719 +0000 UTC m=+0.433206195 container died bd394816bd63eb0b66099f4f8adbf16f429f038df8287b242e0c10cc4434979e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 09:56:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-016edb2b378d9bf74cffd97e7d621c2a26bf85588ea8323c0040aa4cbfb7ac35-merged.mount: Deactivated successfully.
Nov 24 09:56:04 compute-0 podman[268256]: 2025-11-24 09:56:04.835722938 +0000 UTC m=+0.473983414 container remove bd394816bd63eb0b66099f4f8adbf16f429f038df8287b242e0c10cc4434979e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 09:56:04 compute-0 systemd[1]: libpod-conmon-bd394816bd63eb0b66099f4f8adbf16f429f038df8287b242e0c10cc4434979e.scope: Deactivated successfully.
Nov 24 09:56:04 compute-0 sudo[268150]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:04.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:04 compute-0 sudo[268300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:56:04 compute-0 sudo[268300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:04 compute-0 sudo[268300]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:56:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:56:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:56:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:56:05 compute-0 sudo[268325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:56:05 compute-0 sudo[268325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:05 compute-0 ceph-mon[74331]: pgmap v875: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 3.2 KiB/s wr, 1 op/s
Nov 24 09:56:05 compute-0 podman[268389]: 2025-11-24 09:56:05.371852178 +0000 UTC m=+0.037649533 container create 482c2b7cfb9e59d66cd258d677d170434a369ebf6f93069de71e9a60bc80db5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:56:05 compute-0 systemd[1]: Started libpod-conmon-482c2b7cfb9e59d66cd258d677d170434a369ebf6f93069de71e9a60bc80db5b.scope.
Nov 24 09:56:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:56:05 compute-0 podman[268389]: 2025-11-24 09:56:05.447653134 +0000 UTC m=+0.113450509 container init 482c2b7cfb9e59d66cd258d677d170434a369ebf6f93069de71e9a60bc80db5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_gagarin, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:56:05 compute-0 podman[268389]: 2025-11-24 09:56:05.356908037 +0000 UTC m=+0.022705412 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:56:05 compute-0 podman[268389]: 2025-11-24 09:56:05.456701658 +0000 UTC m=+0.122499013 container start 482c2b7cfb9e59d66cd258d677d170434a369ebf6f93069de71e9a60bc80db5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:56:05 compute-0 podman[268389]: 2025-11-24 09:56:05.459742803 +0000 UTC m=+0.125540188 container attach 482c2b7cfb9e59d66cd258d677d170434a369ebf6f93069de71e9a60bc80db5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_gagarin, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:56:05 compute-0 nostalgic_gagarin[268406]: 167 167
Nov 24 09:56:05 compute-0 systemd[1]: libpod-482c2b7cfb9e59d66cd258d677d170434a369ebf6f93069de71e9a60bc80db5b.scope: Deactivated successfully.
Nov 24 09:56:05 compute-0 podman[268389]: 2025-11-24 09:56:05.462925762 +0000 UTC m=+0.128723127 container died 482c2b7cfb9e59d66cd258d677d170434a369ebf6f93069de71e9a60bc80db5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:56:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-05169e4ae7dc94fa08d69a7641dedec8f4719305d5699a7d43affb842ce2311b-merged.mount: Deactivated successfully.
Nov 24 09:56:05 compute-0 podman[268389]: 2025-11-24 09:56:05.504633574 +0000 UTC m=+0.170430929 container remove 482c2b7cfb9e59d66cd258d677d170434a369ebf6f93069de71e9a60bc80db5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_gagarin, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:56:05 compute-0 systemd[1]: libpod-conmon-482c2b7cfb9e59d66cd258d677d170434a369ebf6f93069de71e9a60bc80db5b.scope: Deactivated successfully.
Nov 24 09:56:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 3.2 KiB/s wr, 1 op/s
Nov 24 09:56:05 compute-0 podman[268430]: 2025-11-24 09:56:05.674253993 +0000 UTC m=+0.045549739 container create 5cb4b8c7076946a0bb988501787473e11d5f456b7e580ca012bdcc2cd960267c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:56:05 compute-0 systemd[1]: Started libpod-conmon-5cb4b8c7076946a0bb988501787473e11d5f456b7e580ca012bdcc2cd960267c.scope.
Nov 24 09:56:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfae2dbe2fa892d8cea342216a3039d5ab0d58ad27d2aa5edacd6b11e44ffb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfae2dbe2fa892d8cea342216a3039d5ab0d58ad27d2aa5edacd6b11e44ffb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfae2dbe2fa892d8cea342216a3039d5ab0d58ad27d2aa5edacd6b11e44ffb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfae2dbe2fa892d8cea342216a3039d5ab0d58ad27d2aa5edacd6b11e44ffb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:05 compute-0 podman[268430]: 2025-11-24 09:56:05.657034007 +0000 UTC m=+0.028329793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:56:05 compute-0 podman[268430]: 2025-11-24 09:56:05.761914733 +0000 UTC m=+0.133210489 container init 5cb4b8c7076946a0bb988501787473e11d5f456b7e580ca012bdcc2cd960267c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:56:05 compute-0 podman[268430]: 2025-11-24 09:56:05.768398373 +0000 UTC m=+0.139694119 container start 5cb4b8c7076946a0bb988501787473e11d5f456b7e580ca012bdcc2cd960267c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_pascal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 09:56:05 compute-0 nova_compute[257700]: 2025-11-24 09:56:05.768 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:05 compute-0 podman[268430]: 2025-11-24 09:56:05.771417157 +0000 UTC m=+0.142712903 container attach 5cb4b8c7076946a0bb988501787473e11d5f456b7e580ca012bdcc2cd960267c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 09:56:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:05.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:06 compute-0 blissful_pascal[268448]: {
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:     "0": [
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:         {
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             "devices": [
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "/dev/loop3"
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             ],
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             "lv_name": "ceph_lv0",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             "lv_size": "21470642176",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             "name": "ceph_lv0",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             "tags": {
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.cluster_name": "ceph",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.crush_device_class": "",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.encrypted": "0",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.osd_id": "0",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.type": "block",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.vdo": "0",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:                 "ceph.with_tpm": "0"
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             },
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             "type": "block",
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:             "vg_name": "ceph_vg0"
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:         }
Nov 24 09:56:06 compute-0 blissful_pascal[268448]:     ]
Nov 24 09:56:06 compute-0 blissful_pascal[268448]: }
Nov 24 09:56:06 compute-0 systemd[1]: libpod-5cb4b8c7076946a0bb988501787473e11d5f456b7e580ca012bdcc2cd960267c.scope: Deactivated successfully.
Nov 24 09:56:06 compute-0 podman[268430]: 2025-11-24 09:56:06.057332085 +0000 UTC m=+0.428627831 container died 5cb4b8c7076946a0bb988501787473e11d5f456b7e580ca012bdcc2cd960267c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_pascal, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 09:56:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dfae2dbe2fa892d8cea342216a3039d5ab0d58ad27d2aa5edacd6b11e44ffb3-merged.mount: Deactivated successfully.
Nov 24 09:56:06 compute-0 podman[268430]: 2025-11-24 09:56:06.094291189 +0000 UTC m=+0.465586935 container remove 5cb4b8c7076946a0bb988501787473e11d5f456b7e580ca012bdcc2cd960267c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 24 09:56:06 compute-0 systemd[1]: libpod-conmon-5cb4b8c7076946a0bb988501787473e11d5f456b7e580ca012bdcc2cd960267c.scope: Deactivated successfully.
Nov 24 09:56:06 compute-0 sudo[268325]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:06 compute-0 sudo[268469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:56:06 compute-0 sudo[268469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:06 compute-0 sudo[268469]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:06 compute-0 sudo[268494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:56:06 compute-0 sudo[268494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:06 compute-0 sudo[268520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:56:06 compute-0 sudo[268520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:06 compute-0 sudo[268520]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:06 compute-0 podman[268585]: 2025-11-24 09:56:06.63329522 +0000 UTC m=+0.036583136 container create 4b08ef4bec81370aea3c8fdbc3bc1240eaf4ca7cca577daa766ec84980c3f86d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:56:06 compute-0 systemd[1]: Started libpod-conmon-4b08ef4bec81370aea3c8fdbc3bc1240eaf4ca7cca577daa766ec84980c3f86d.scope.
Nov 24 09:56:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:56:06 compute-0 podman[268585]: 2025-11-24 09:56:06.691319076 +0000 UTC m=+0.094607022 container init 4b08ef4bec81370aea3c8fdbc3bc1240eaf4ca7cca577daa766ec84980c3f86d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hermann, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 24 09:56:06 compute-0 podman[268585]: 2025-11-24 09:56:06.697179231 +0000 UTC m=+0.100467157 container start 4b08ef4bec81370aea3c8fdbc3bc1240eaf4ca7cca577daa766ec84980c3f86d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hermann, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 09:56:06 compute-0 podman[268585]: 2025-11-24 09:56:06.700232397 +0000 UTC m=+0.103520343 container attach 4b08ef4bec81370aea3c8fdbc3bc1240eaf4ca7cca577daa766ec84980c3f86d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:56:06 compute-0 quizzical_hermann[268601]: 167 167
Nov 24 09:56:06 compute-0 systemd[1]: libpod-4b08ef4bec81370aea3c8fdbc3bc1240eaf4ca7cca577daa766ec84980c3f86d.scope: Deactivated successfully.
Nov 24 09:56:06 compute-0 podman[268585]: 2025-11-24 09:56:06.703944968 +0000 UTC m=+0.107232894 container died 4b08ef4bec81370aea3c8fdbc3bc1240eaf4ca7cca577daa766ec84980c3f86d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:56:06 compute-0 podman[268585]: 2025-11-24 09:56:06.617751515 +0000 UTC m=+0.021039471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:56:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a102f2e021d7f06cebf658384f2b3ea51d7a8bbd24ec8859bcb8b49e1de2a70-merged.mount: Deactivated successfully.
Nov 24 09:56:06 compute-0 podman[268585]: 2025-11-24 09:56:06.742916213 +0000 UTC m=+0.146204139 container remove 4b08ef4bec81370aea3c8fdbc3bc1240eaf4ca7cca577daa766ec84980c3f86d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hermann, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 09:56:06 compute-0 systemd[1]: libpod-conmon-4b08ef4bec81370aea3c8fdbc3bc1240eaf4ca7cca577daa766ec84980c3f86d.scope: Deactivated successfully.
Nov 24 09:56:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:06.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:06 compute-0 podman[268624]: 2025-11-24 09:56:06.912844339 +0000 UTC m=+0.039882648 container create fd7c43e8f684ef55e58c6cd9d98e6df60bd911dece59d447345bba8943776e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_shockley, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 09:56:06 compute-0 systemd[1]: Started libpod-conmon-fd7c43e8f684ef55e58c6cd9d98e6df60bd911dece59d447345bba8943776e91.scope.
Nov 24 09:56:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac9ceb4689aa919c1b7f1770c9e9c282cfcd2f76a618f15e4e8d9cf1b3f82e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac9ceb4689aa919c1b7f1770c9e9c282cfcd2f76a618f15e4e8d9cf1b3f82e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac9ceb4689aa919c1b7f1770c9e9c282cfcd2f76a618f15e4e8d9cf1b3f82e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac9ceb4689aa919c1b7f1770c9e9c282cfcd2f76a618f15e4e8d9cf1b3f82e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:06 compute-0 podman[268624]: 2025-11-24 09:56:06.989094136 +0000 UTC m=+0.116132425 container init fd7c43e8f684ef55e58c6cd9d98e6df60bd911dece59d447345bba8943776e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_shockley, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:56:06 compute-0 podman[268624]: 2025-11-24 09:56:06.89716189 +0000 UTC m=+0.024200199 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:56:06 compute-0 podman[268624]: 2025-11-24 09:56:06.996643613 +0000 UTC m=+0.123681902 container start fd7c43e8f684ef55e58c6cd9d98e6df60bd911dece59d447345bba8943776e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_shockley, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 09:56:07 compute-0 podman[268624]: 2025-11-24 09:56:07.000218322 +0000 UTC m=+0.127256621 container attach fd7c43e8f684ef55e58c6cd9d98e6df60bd911dece59d447345bba8943776e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_shockley, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:56:07 compute-0 ceph-mon[74331]: pgmap v876: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 3.2 KiB/s wr, 1 op/s
Nov 24 09:56:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:07.512Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:56:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:07.513Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:56:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 3.2 KiB/s wr, 2 op/s
Nov 24 09:56:07 compute-0 lvm[268715]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:56:07 compute-0 lvm[268715]: VG ceph_vg0 finished
Nov 24 09:56:07 compute-0 reverent_shockley[268640]: {}
Nov 24 09:56:07 compute-0 systemd[1]: libpod-fd7c43e8f684ef55e58c6cd9d98e6df60bd911dece59d447345bba8943776e91.scope: Deactivated successfully.
Nov 24 09:56:07 compute-0 systemd[1]: libpod-fd7c43e8f684ef55e58c6cd9d98e6df60bd911dece59d447345bba8943776e91.scope: Consumed 1.004s CPU time.
Nov 24 09:56:07 compute-0 podman[268624]: 2025-11-24 09:56:07.65348642 +0000 UTC m=+0.780524709 container died fd7c43e8f684ef55e58c6cd9d98e6df60bd911dece59d447345bba8943776e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_shockley, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:56:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ac9ceb4689aa919c1b7f1770c9e9c282cfcd2f76a618f15e4e8d9cf1b3f82e9-merged.mount: Deactivated successfully.
Nov 24 09:56:07 compute-0 podman[268624]: 2025-11-24 09:56:07.696143007 +0000 UTC m=+0.823181296 container remove fd7c43e8f684ef55e58c6cd9d98e6df60bd911dece59d447345bba8943776e91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_shockley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:56:07 compute-0 systemd[1]: libpod-conmon-fd7c43e8f684ef55e58c6cd9d98e6df60bd911dece59d447345bba8943776e91.scope: Deactivated successfully.
Nov 24 09:56:07 compute-0 sudo[268494]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:56:07 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:56:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:56:07 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:56:07 compute-0 sudo[268729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:56:07 compute-0 sudo[268729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:07 compute-0 sudo[268729]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:07.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:08 compute-0 nova_compute[257700]: 2025-11-24 09:56:08.382 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:08 compute-0 ceph-mon[74331]: pgmap v877: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 3.2 KiB/s wr, 2 op/s
Nov 24 09:56:08 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:56:08 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:56:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:08.869Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:56:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:08.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:56:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:08.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:09 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:09.199 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:56:09 compute-0 nova_compute[257700]: 2025-11-24 09:56:09.200 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:09 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:09.200 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:56:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Nov 24 09:56:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:09.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:56:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:56:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:56:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:56:10 compute-0 ceph-mon[74331]: pgmap v878: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Nov 24 09:56:10 compute-0 nova_compute[257700]: 2025-11-24 09:56:10.769 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:10.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:56:10] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Nov 24 09:56:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:56:10] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Nov 24 09:56:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Nov 24 09:56:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/785571714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:11 compute-0 podman[268758]: 2025-11-24 09:56:11.781567766 +0000 UTC m=+0.057063374 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 09:56:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:11.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:12 compute-0 ceph-mon[74331]: pgmap v879: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.1 KiB/s wr, 2 op/s
Nov 24 09:56:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:12.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:13 compute-0 nova_compute[257700]: 2025-11-24 09:56:13.385 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.4 KiB/s wr, 31 op/s
Nov 24 09:56:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:13 compute-0 nova_compute[257700]: 2025-11-24 09:56:13.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:13 compute-0 nova_compute[257700]: 2025-11-24 09:56:13.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:13 compute-0 nova_compute[257700]: 2025-11-24 09:56:13.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:56:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:13.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:14 compute-0 ceph-mon[74331]: pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.4 KiB/s wr, 31 op/s
Nov 24 09:56:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:14.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:56:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:56:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:56:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:56:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:56:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:56:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:56:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:56:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:56:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:56:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Nov 24 09:56:15 compute-0 nova_compute[257700]: 2025-11-24 09:56:15.771 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:56:15 compute-0 nova_compute[257700]: 2025-11-24 09:56:15.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:16.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:16 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:16.201 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:16 compute-0 ceph-mon[74331]: pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Nov 24 09:56:16 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3645227929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:16.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:16 compute-0 nova_compute[257700]: 2025-11-24 09:56:16.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:16 compute-0 nova_compute[257700]: 2025-11-24 09:56:16.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:17.513Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:56:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Nov 24 09:56:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3862106592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:17 compute-0 nova_compute[257700]: 2025-11-24 09:56:17.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:17 compute-0 nova_compute[257700]: 2025-11-24 09:56:17.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:56:17 compute-0 nova_compute[257700]: 2025-11-24 09:56:17.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:56:17 compute-0 nova_compute[257700]: 2025-11-24 09:56:17.934 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:56:17 compute-0 nova_compute[257700]: 2025-11-24 09:56:17.934 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:17 compute-0 nova_compute[257700]: 2025-11-24 09:56:17.956 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:17 compute-0 nova_compute[257700]: 2025-11-24 09:56:17.956 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:17 compute-0 nova_compute[257700]: 2025-11-24 09:56:17.956 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:17 compute-0 nova_compute[257700]: 2025-11-24 09:56:17.956 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:56:17 compute-0 nova_compute[257700]: 2025-11-24 09:56:17.957 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:56:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:18.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:18 compute-0 nova_compute[257700]: 2025-11-24 09:56:18.387 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:56:18 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398718278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:18 compute-0 nova_compute[257700]: 2025-11-24 09:56:18.409 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:56:18 compute-0 nova_compute[257700]: 2025-11-24 09:56:18.569 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:56:18 compute-0 nova_compute[257700]: 2025-11-24 09:56:18.570 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4633MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:56:18 compute-0 nova_compute[257700]: 2025-11-24 09:56:18.570 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:18 compute-0 nova_compute[257700]: 2025-11-24 09:56:18.571 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:18 compute-0 nova_compute[257700]: 2025-11-24 09:56:18.645 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:56:18 compute-0 nova_compute[257700]: 2025-11-24 09:56:18.645 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:56:18 compute-0 nova_compute[257700]: 2025-11-24 09:56:18.663 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:56:18 compute-0 ceph-mon[74331]: pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Nov 24 09:56:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1398718278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:18.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:56:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:18.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:56:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:18.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:56:19 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2192735560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:19 compute-0 nova_compute[257700]: 2025-11-24 09:56:19.131 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:56:19 compute-0 nova_compute[257700]: 2025-11-24 09:56:19.140 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:56:19 compute-0 nova_compute[257700]: 2025-11-24 09:56:19.153 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:56:19 compute-0 nova_compute[257700]: 2025-11-24 09:56:19.155 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:56:19 compute-0 nova_compute[257700]: 2025-11-24 09:56:19.155 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 24 09:56:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/872817273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2192735560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4070689135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:56:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:56:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:56:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:56:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:20.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:20.566 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:20.566 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:20.566 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:20 compute-0 nova_compute[257700]: 2025-11-24 09:56:20.772 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:20 compute-0 ceph-mon[74331]: pgmap v883: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 24 09:56:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:20.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:56:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 24 09:56:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:56:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 24 09:56:21 compute-0 nova_compute[257700]: 2025-11-24 09:56:21.150 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 24 09:56:21 compute-0 nova_compute[257700]: 2025-11-24 09:56:21.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:56:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:22.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:22 compute-0 ceph-mon[74331]: pgmap v884: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 24 09:56:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:22.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:23 compute-0 nova_compute[257700]: 2025-11-24 09:56:23.390 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Nov 24 09:56:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:24.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:24.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:24 compute-0 ceph-mon[74331]: pgmap v885: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Nov 24 09:56:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:56:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:56:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:56:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:56:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:56:25 compute-0 nova_compute[257700]: 2025-11-24 09:56:25.774 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:26.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.134 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.134 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.151 257704 DEBUG nova.compute.manager [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.214 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.215 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.220 257704 DEBUG nova.virt.hardware [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.220 257704 INFO nova.compute.claims [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Claim successful on node compute-0.ctlplane.example.com
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.319 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:56:26 compute-0 sudo[268857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:56:26 compute-0 sudo[268857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:26 compute-0 sudo[268857]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:56:26 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609421726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.757 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.765 257704 DEBUG nova.compute.provider_tree [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.776 257704 DEBUG nova.scheduler.client.report [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.793 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.794 257704 DEBUG nova.compute.manager [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.836 257704 DEBUG nova.compute.manager [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.837 257704 DEBUG nova.network.neutron [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.853 257704 INFO nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.867 257704 DEBUG nova.compute.manager [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 09:56:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:26.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:26 compute-0 ceph-mon[74331]: pgmap v886: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:56:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/609421726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.963 257704 DEBUG nova.compute.manager [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.964 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.964 257704 INFO nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Creating image(s)
Nov 24 09:56:26 compute-0 nova_compute[257700]: 2025-11-24 09:56:26.995 257704 DEBUG nova.storage.rbd_utils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.032 257704 DEBUG nova.storage.rbd_utils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.067 257704 DEBUG nova.storage.rbd_utils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.072 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.155 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.157 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.157 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.158 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.188 257704 DEBUG nova.storage.rbd_utils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.192 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.361 257704 DEBUG nova.policy [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.455 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:56:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:27.514Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.516 257704 DEBUG nova.storage.rbd_utils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 24 09:56:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v887: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.610 257704 DEBUG nova.objects.instance [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid b014f69e-04e5-4c5d-bb6c-e88b4410e6ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.621 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.621 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Ensure instance console log exists: /var/lib/nova/instances/b014f69e-04e5-4c5d-bb6c-e88b4410e6ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.622 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.622 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:27 compute-0 nova_compute[257700]: 2025-11-24 09:56:27.623 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:28.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:28 compute-0 nova_compute[257700]: 2025-11-24 09:56:28.268 257704 DEBUG nova.network.neutron [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Successfully created port: c6d0f148-a8f4-467e-8be3-1a120663dc95 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 09:56:28 compute-0 nova_compute[257700]: 2025-11-24 09:56:28.391 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:28.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:56:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:28.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:28 compute-0 ceph-mon[74331]: pgmap v887: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 09:56:28 compute-0 nova_compute[257700]: 2025-11-24 09:56:28.940 257704 DEBUG nova.network.neutron [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Successfully updated port: c6d0f148-a8f4-467e-8be3-1a120663dc95 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 09:56:28 compute-0 nova_compute[257700]: 2025-11-24 09:56:28.951 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:56:28 compute-0 nova_compute[257700]: 2025-11-24 09:56:28.952 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:56:28 compute-0 nova_compute[257700]: 2025-11-24 09:56:28.952 257704 DEBUG nova.network.neutron [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.025 257704 DEBUG nova.compute.manager [req-ac83ba15-4823-47f5-9e9a-31625f09a0b6 req-32279c50-f56d-4d8e-ac2a-34e1b300eb6e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Received event network-changed-c6d0f148-a8f4-467e-8be3-1a120663dc95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.025 257704 DEBUG nova.compute.manager [req-ac83ba15-4823-47f5-9e9a-31625f09a0b6 req-32279c50-f56d-4d8e-ac2a-34e1b300eb6e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Refreshing instance network info cache due to event network-changed-c6d0f148-a8f4-467e-8be3-1a120663dc95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.025 257704 DEBUG oslo_concurrency.lockutils [req-ac83ba15-4823-47f5-9e9a-31625f09a0b6 req-32279c50-f56d-4d8e-ac2a-34e1b300eb6e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.276 257704 DEBUG nova.network.neutron [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 09:56:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.847 257704 DEBUG nova.network.neutron [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Updating instance_info_cache with network_info: [{"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.865 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.866 257704 DEBUG nova.compute.manager [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Instance network_info: |[{"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.866 257704 DEBUG oslo_concurrency.lockutils [req-ac83ba15-4823-47f5-9e9a-31625f09a0b6 req-32279c50-f56d-4d8e-ac2a-34e1b300eb6e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.866 257704 DEBUG nova.network.neutron [req-ac83ba15-4823-47f5-9e9a-31625f09a0b6 req-32279c50-f56d-4d8e-ac2a-34e1b300eb6e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Refreshing network info cache for port c6d0f148-a8f4-467e-8be3-1a120663dc95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.869 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Start _get_guest_xml network_info=[{"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_format': None, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.874 257704 WARNING nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.878 257704 DEBUG nova.virt.libvirt.host [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.879 257704 DEBUG nova.virt.libvirt.host [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.882 257704 DEBUG nova.virt.libvirt.host [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.883 257704 DEBUG nova.virt.libvirt.host [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.883 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.884 257704 DEBUG nova.virt.hardware [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.884 257704 DEBUG nova.virt.hardware [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.884 257704 DEBUG nova.virt.hardware [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.884 257704 DEBUG nova.virt.hardware [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.884 257704 DEBUG nova.virt.hardware [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.885 257704 DEBUG nova.virt.hardware [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.885 257704 DEBUG nova.virt.hardware [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.885 257704 DEBUG nova.virt.hardware [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.885 257704 DEBUG nova.virt.hardware [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.885 257704 DEBUG nova.virt.hardware [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.886 257704 DEBUG nova.virt.hardware [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 09:56:29 compute-0 nova_compute[257700]: 2025-11-24 09:56:29.888 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:56:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:56:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:56:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:56:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:56:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:30.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:56:30 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1983036459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.339 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.372 257704 DEBUG nova.storage.rbd_utils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.377 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.779 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:56:30 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4078391709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.812 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.814 257704 DEBUG nova.virt.libvirt.vif [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:56:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1166261396',display_name='tempest-TestNetworkBasicOps-server-1166261396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1166261396',id=4,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOlcnO+6RLSVuTHVahBA4wvrglLYJUElJsxDY5VFlil+CW8gtXSqW2DbLbNmozC6q2P0tVa4tVNMNBCnQGiQcIUf/IVfWp1zQZ3KhjDNVA3XKoC7hMjVrSu87SHoeev9EQ==',key_name='tempest-TestNetworkBasicOps-1920594917',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-57ahusuc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:56:26Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=b014f69e-04e5-4c5d-bb6c-e88b4410e6ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.814 257704 DEBUG nova.network.os_vif_util [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.815 257704 DEBUG nova.network.os_vif_util [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:13:8b,bridge_name='br-int',has_traffic_filtering=True,id=c6d0f148-a8f4-467e-8be3-1a120663dc95,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6d0f148-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.816 257704 DEBUG nova.objects.instance [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid b014f69e-04e5-4c5d-bb6c-e88b4410e6ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.877 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] End _get_guest_xml xml=<domain type="kvm">
Nov 24 09:56:30 compute-0 nova_compute[257700]:   <uuid>b014f69e-04e5-4c5d-bb6c-e88b4410e6ab</uuid>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   <name>instance-00000004</name>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   <memory>131072</memory>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   <vcpu>1</vcpu>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   <metadata>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <nova:name>tempest-TestNetworkBasicOps-server-1166261396</nova:name>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <nova:creationTime>2025-11-24 09:56:29</nova:creationTime>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <nova:flavor name="m1.nano">
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <nova:memory>128</nova:memory>
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <nova:disk>1</nova:disk>
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <nova:swap>0</nova:swap>
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <nova:vcpus>1</nova:vcpus>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       </nova:flavor>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <nova:owner>
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       </nova:owner>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <nova:ports>
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <nova:port uuid="c6d0f148-a8f4-467e-8be3-1a120663dc95">
Nov 24 09:56:30 compute-0 nova_compute[257700]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:         </nova:port>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       </nova:ports>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     </nova:instance>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   </metadata>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   <sysinfo type="smbios">
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <system>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <entry name="manufacturer">RDO</entry>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <entry name="product">OpenStack Compute</entry>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <entry name="serial">b014f69e-04e5-4c5d-bb6c-e88b4410e6ab</entry>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <entry name="uuid">b014f69e-04e5-4c5d-bb6c-e88b4410e6ab</entry>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <entry name="family">Virtual Machine</entry>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     </system>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   </sysinfo>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   <os>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <boot dev="hd"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <smbios mode="sysinfo"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   </os>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   <features>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <acpi/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <apic/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <vmcoreinfo/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   </features>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   <clock offset="utc">
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <timer name="hpet" present="no"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   </clock>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   <cpu mode="host-model" match="exact">
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   </cpu>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   <devices>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <disk type="network" device="disk">
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk">
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       </source>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       </auth>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <target dev="vda" bus="virtio"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     </disk>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <disk type="network" device="cdrom">
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk.config">
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       </source>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 09:56:30 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       </auth>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <target dev="sda" bus="sata"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     </disk>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <interface type="ethernet">
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <mac address="fa:16:3e:88:13:8b"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <mtu size="1442"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <target dev="tapc6d0f148-a8"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     </interface>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <serial type="pty">
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <log file="/var/lib/nova/instances/b014f69e-04e5-4c5d-bb6c-e88b4410e6ab/console.log" append="off"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     </serial>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <video>
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     </video>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <input type="tablet" bus="usb"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <rng model="virtio">
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <backend model="random">/dev/urandom</backend>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     </rng>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <controller type="usb" index="0"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     <memballoon model="virtio">
Nov 24 09:56:30 compute-0 nova_compute[257700]:       <stats period="10"/>
Nov 24 09:56:30 compute-0 nova_compute[257700]:     </memballoon>
Nov 24 09:56:30 compute-0 nova_compute[257700]:   </devices>
Nov 24 09:56:30 compute-0 nova_compute[257700]: </domain>
Nov 24 09:56:30 compute-0 nova_compute[257700]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.878 257704 DEBUG nova.compute.manager [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Preparing to wait for external event network-vif-plugged-c6d0f148-a8f4-467e-8be3-1a120663dc95 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.878 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.879 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.879 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.880 257704 DEBUG nova.virt.libvirt.vif [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:56:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1166261396',display_name='tempest-TestNetworkBasicOps-server-1166261396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1166261396',id=4,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOlcnO+6RLSVuTHVahBA4wvrglLYJUElJsxDY5VFlil+CW8gtXSqW2DbLbNmozC6q2P0tVa4tVNMNBCnQGiQcIUf/IVfWp1zQZ3KhjDNVA3XKoC7hMjVrSu87SHoeev9EQ==',key_name='tempest-TestNetworkBasicOps-1920594917',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-57ahusuc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:56:26Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=b014f69e-04e5-4c5d-bb6c-e88b4410e6ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.880 257704 DEBUG nova.network.os_vif_util [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.880 257704 DEBUG nova.network.os_vif_util [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:13:8b,bridge_name='br-int',has_traffic_filtering=True,id=c6d0f148-a8f4-467e-8be3-1a120663dc95,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6d0f148-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.881 257704 DEBUG os_vif [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:13:8b,bridge_name='br-int',has_traffic_filtering=True,id=c6d0f148-a8f4-467e-8be3-1a120663dc95,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6d0f148-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.881 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.882 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.882 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.885 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.885 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6d0f148-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.886 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6d0f148-a8, col_values=(('external_ids', {'iface-id': 'c6d0f148-a8f4-467e-8be3-1a120663dc95', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:13:8b', 'vm-uuid': 'b014f69e-04e5-4c5d-bb6c-e88b4410e6ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.887 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:30 compute-0 NetworkManager[48883]: <info>  [1763978190.8882] manager: (tapc6d0f148-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Nov 24 09:56:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:30.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:30 compute-0 ceph-mon[74331]: pgmap v888: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:56:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1983036459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:56:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:56:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4078391709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:56:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:56:30] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 09:56:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:56:30] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.998 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:56:30 compute-0 nova_compute[257700]: 2025-11-24 09:56:30.999 257704 INFO os_vif [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:13:8b,bridge_name='br-int',has_traffic_filtering=True,id=c6d0f148-a8f4-467e-8be3-1a120663dc95,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6d0f148-a8')
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.024 257704 DEBUG nova.network.neutron [req-ac83ba15-4823-47f5-9e9a-31625f09a0b6 req-32279c50-f56d-4d8e-ac2a-34e1b300eb6e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Updated VIF entry in instance network info cache for port c6d0f148-a8f4-467e-8be3-1a120663dc95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.024 257704 DEBUG nova.network.neutron [req-ac83ba15-4823-47f5-9e9a-31625f09a0b6 req-32279c50-f56d-4d8e-ac2a-34e1b300eb6e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Updating instance_info_cache with network_info: [{"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.038 257704 DEBUG oslo_concurrency.lockutils [req-ac83ba15-4823-47f5-9e9a-31625f09a0b6 req-32279c50-f56d-4d8e-ac2a-34e1b300eb6e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.041 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.042 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.042 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:88:13:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.042 257704 INFO nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Using config drive
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.063 257704 DEBUG nova.storage.rbd_utils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.427 257704 INFO nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Creating config drive at /var/lib/nova/instances/b014f69e-04e5-4c5d-bb6c-e88b4410e6ab/disk.config
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.432 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b014f69e-04e5-4c5d-bb6c-e88b4410e6ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp90vsgy7_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:56:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v889: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.557 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b014f69e-04e5-4c5d-bb6c-e88b4410e6ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp90vsgy7_" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.579 257704 DEBUG nova.storage.rbd_utils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.582 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b014f69e-04e5-4c5d-bb6c-e88b4410e6ab/disk.config b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.736 257704 DEBUG oslo_concurrency.processutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b014f69e-04e5-4c5d-bb6c-e88b4410e6ab/disk.config b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.737 257704 INFO nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Deleting local config drive /var/lib/nova/instances/b014f69e-04e5-4c5d-bb6c-e88b4410e6ab/disk.config because it was imported into RBD.
Nov 24 09:56:31 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 24 09:56:31 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 24 09:56:31 compute-0 kernel: tapc6d0f148-a8: entered promiscuous mode
Nov 24 09:56:31 compute-0 NetworkManager[48883]: <info>  [1763978191.8441] manager: (tapc6d0f148-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Nov 24 09:56:31 compute-0 ovn_controller[155123]: 2025-11-24T09:56:31Z|00039|binding|INFO|Claiming lport c6d0f148-a8f4-467e-8be3-1a120663dc95 for this chassis.
Nov 24 09:56:31 compute-0 ovn_controller[155123]: 2025-11-24T09:56:31Z|00040|binding|INFO|c6d0f148-a8f4-467e-8be3-1a120663dc95: Claiming fa:16:3e:88:13:8b 10.100.0.6
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.883 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:31 compute-0 systemd-udevd[269205]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.890 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:31 compute-0 NetworkManager[48883]: <info>  [1763978191.8981] device (tapc6d0f148-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:56:31 compute-0 NetworkManager[48883]: <info>  [1763978191.8989] device (tapc6d0f148-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 09:56:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:31.900 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:13:8b 10.100.0.6'], port_security=['fa:16:3e:88:13:8b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b014f69e-04e5-4c5d-bb6c-e88b4410e6ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a54e00b-2ddf-4829-be22-9a556b586781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fae8e741-c53a-4962-8907-2f1b9659e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cefc33a4-ddb4-430f-bd3b-965ffc7d2eca, chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=c6d0f148-a8f4-467e-8be3-1a120663dc95) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:56:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:31.902 165073 INFO neutron.agent.ovn.metadata.agent [-] Port c6d0f148-a8f4-467e-8be3-1a120663dc95 in datapath 4a54e00b-2ddf-4829-be22-9a556b586781 bound to our chassis
Nov 24 09:56:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:31.904 165073 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a54e00b-2ddf-4829-be22-9a556b586781
Nov 24 09:56:31 compute-0 systemd-machined[219130]: New machine qemu-2-instance-00000004.
Nov 24 09:56:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:31.917 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[892f0b7c-0203-486a-9aa2-a8c39f0b69c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:31.918 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a54e00b-21 in ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 09:56:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:31.919 264910 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a54e00b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 09:56:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:31.919 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4a2aa8-4f67-46fe-ac6a-9e1271ab9886]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:31.920 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[d62eea5a-156b-4862-a499-43c8209b4a61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:31.933 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd10818-548c-4bc8-a1ec-f749999ed11d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:31 compute-0 ovn_controller[155123]: 2025-11-24T09:56:31Z|00041|binding|INFO|Setting lport c6d0f148-a8f4-467e-8be3-1a120663dc95 ovn-installed in OVS
Nov 24 09:56:31 compute-0 ovn_controller[155123]: 2025-11-24T09:56:31Z|00042|binding|INFO|Setting lport c6d0f148-a8f4-467e-8be3-1a120663dc95 up in Southbound
Nov 24 09:56:31 compute-0 nova_compute[257700]: 2025-11-24 09:56:31.956 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:31 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000004.
Nov 24 09:56:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:31.961 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b33fdc-c415-4bdc-8d0a-8297ad43ae5c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:31.992 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[43251c02-a1af-4c3f-8ddb-283543e6700d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:31 compute-0 NetworkManager[48883]: <info>  [1763978191.9992] manager: (tap4a54e00b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/34)
Nov 24 09:56:32 compute-0 systemd-udevd[269209]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:31.998 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[6f44c5eb-d792-4bf7-bb84-982df5408fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:32.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.032 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[caf6decd-8a33-43d5-b45e-0415bb8967a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.035 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[fe580e8d-8dbc-4a73-84e2-7302af4bdab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:32 compute-0 NetworkManager[48883]: <info>  [1763978192.0584] device (tap4a54e00b-20): carrier: link connected
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.065 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[bcda53fb-0152-4888-bc53-0d69cc4b3ac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.084 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2af6e0-7683-402a-ae26-bf45502d6c36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a54e00b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:bd:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 413962, 'reachable_time': 15321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269241, 'error': None, 'target': 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.103 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[2b91d544-2b21-4e1a-bc30-572d3e72b1e1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:bdd5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 413962, 'tstamp': 413962}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269243, 'error': None, 'target': 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.115 257704 DEBUG nova.compute.manager [req-f72abe59-5f2a-4692-9a97-2f2c8ee19dde req-f6d98775-cd5e-4d51-b6b9-2b1c61bd24a5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Received event network-vif-plugged-c6d0f148-a8f4-467e-8be3-1a120663dc95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.116 257704 DEBUG oslo_concurrency.lockutils [req-f72abe59-5f2a-4692-9a97-2f2c8ee19dde req-f6d98775-cd5e-4d51-b6b9-2b1c61bd24a5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.116 257704 DEBUG oslo_concurrency.lockutils [req-f72abe59-5f2a-4692-9a97-2f2c8ee19dde req-f6d98775-cd5e-4d51-b6b9-2b1c61bd24a5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.116 257704 DEBUG oslo_concurrency.lockutils [req-f72abe59-5f2a-4692-9a97-2f2c8ee19dde req-f6d98775-cd5e-4d51-b6b9-2b1c61bd24a5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.117 257704 DEBUG nova.compute.manager [req-f72abe59-5f2a-4692-9a97-2f2c8ee19dde req-f6d98775-cd5e-4d51-b6b9-2b1c61bd24a5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Processing event network-vif-plugged-c6d0f148-a8f4-467e-8be3-1a120663dc95 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.123 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[267197cb-bd05-4876-aa85-cc9865f5ec1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a54e00b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:bd:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 413962, 'reachable_time': 15321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269244, 'error': None, 'target': 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.160 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[511bcbd3-5cec-41a5-935d-87087d862af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.209 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2d9e72-eaa8-4c9d-94ec-d538b06c4862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.211 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a54e00b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.211 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.212 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a54e00b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.213 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:32 compute-0 NetworkManager[48883]: <info>  [1763978192.2143] manager: (tap4a54e00b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 24 09:56:32 compute-0 kernel: tap4a54e00b-20: entered promiscuous mode
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.217 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.218 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a54e00b-20, col_values=(('external_ids', {'iface-id': '825c51a9-1ab7-4d33-9d7f-c9278b05a734'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.219 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:32 compute-0 ovn_controller[155123]: 2025-11-24T09:56:32Z|00043|binding|INFO|Releasing lport 825c51a9-1ab7-4d33-9d7f-c9278b05a734 from this chassis (sb_readonly=0)
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.233 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.234 165073 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a54e00b-2ddf-4829-be22-9a556b586781.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a54e00b-2ddf-4829-be22-9a556b586781.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.235 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[834b8d2c-8b86-4c5d-83a5-4f6571da3625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.235 165073 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: global
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     log         /dev/log local0 debug
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     log-tag     haproxy-metadata-proxy-4a54e00b-2ddf-4829-be22-9a556b586781
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     user        root
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     group       root
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     maxconn     1024
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     pidfile     /var/lib/neutron/external/pids/4a54e00b-2ddf-4829-be22-9a556b586781.pid.haproxy
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     daemon
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: defaults
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     log global
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     mode http
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     option httplog
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     option dontlognull
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     option http-server-close
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     option forwardfor
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     retries                 3
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     timeout http-request    30s
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     timeout connect         30s
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     timeout client          32s
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     timeout server          32s
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     timeout http-keep-alive 30s
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: listen listener
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     bind 169.254.169.254:80
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:     http-request add-header X-OVN-Network-ID 4a54e00b-2ddf-4829-be22-9a556b586781
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 09:56:32 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:56:32.236 165073 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'env', 'PROCESS_TAG=haproxy-4a54e00b-2ddf-4829-be22-9a556b586781', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a54e00b-2ddf-4829-be22-9a556b586781.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 09:56:32 compute-0 podman[269279]: 2025-11-24 09:56:32.584831412 +0000 UTC m=+0.059740440 container create 4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:56:32 compute-0 systemd[1]: Started libpod-conmon-4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509.scope.
Nov 24 09:56:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:56:32 compute-0 podman[269279]: 2025-11-24 09:56:32.552483531 +0000 UTC m=+0.027392589 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8138a8c70e7d0457e244c836822f524085b4e78fa99c62495f104a05341cf4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 09:56:32 compute-0 podman[269279]: 2025-11-24 09:56:32.664877383 +0000 UTC m=+0.139786431 container init 4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 24 09:56:32 compute-0 podman[269279]: 2025-11-24 09:56:32.669763923 +0000 UTC m=+0.144672951 container start 4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:56:32 compute-0 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[269295]: [NOTICE]   (269299) : New worker (269301) forked
Nov 24 09:56:32 compute-0 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[269295]: [NOTICE]   (269299) : Loading success.
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.894 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978192.8935652, b014f69e-04e5-4c5d-bb6c-e88b4410e6ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.894 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] VM Started (Lifecycle Event)
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.898 257704 DEBUG nova.compute.manager [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.903 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.908 257704 INFO nova.virt.libvirt.driver [-] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Instance spawned successfully.
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.908 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.916 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:56:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:32.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.922 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.931 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.932 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.932 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.932 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.933 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.933 257704 DEBUG nova.virt.libvirt.driver [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.938 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.939 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978192.893837, b014f69e-04e5-4c5d-bb6c-e88b4410e6ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.939 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] VM Paused (Lifecycle Event)
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.958 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.961 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978192.9018168, b014f69e-04e5-4c5d-bb6c-e88b4410e6ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.961 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] VM Resumed (Lifecycle Event)
Nov 24 09:56:32 compute-0 ceph-mon[74331]: pgmap v889: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.976 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.980 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:56:32 compute-0 nova_compute[257700]: 2025-11-24 09:56:32.996 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:56:33 compute-0 nova_compute[257700]: 2025-11-24 09:56:33.089 257704 INFO nova.compute.manager [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Took 6.13 seconds to spawn the instance on the hypervisor.
Nov 24 09:56:33 compute-0 nova_compute[257700]: 2025-11-24 09:56:33.090 257704 DEBUG nova.compute.manager [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:56:33 compute-0 nova_compute[257700]: 2025-11-24 09:56:33.206 257704 INFO nova.compute.manager [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Took 7.01 seconds to build instance.
Nov 24 09:56:33 compute-0 nova_compute[257700]: 2025-11-24 09:56:33.278 257704 DEBUG oslo_concurrency.lockutils [None req-64ea9938-e265-4733-aef7-e8846c4143b6 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 24 09:56:33 compute-0 sshd-session[269251]: Received disconnect from 36.255.3.203 port 59929:11: Bye Bye [preauth]
Nov 24 09:56:33 compute-0 sshd-session[269251]: Disconnected from authenticating user root 36.255.3.203 port 59929 [preauth]
Nov 24 09:56:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:33 compute-0 podman[269353]: 2025-11-24 09:56:33.786011603 +0000 UTC m=+0.060590062 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 24 09:56:33 compute-0 podman[269354]: 2025-11-24 09:56:33.816262221 +0000 UTC m=+0.084598955 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:56:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:34.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 09:56:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2620 syncs, 4.03 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1371 writes, 3660 keys, 1371 commit groups, 1.0 writes per commit group, ingest: 2.75 MB, 0.00 MB/s
                                           Interval WAL: 1371 writes, 611 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 09:56:34 compute-0 nova_compute[257700]: 2025-11-24 09:56:34.187 257704 DEBUG nova.compute.manager [req-c7d8604c-e7c9-42e5-a5e0-38dfaed356e4 req-881bd663-3000-438f-a390-33bbff03d6ad 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Received event network-vif-plugged-c6d0f148-a8f4-467e-8be3-1a120663dc95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:34 compute-0 nova_compute[257700]: 2025-11-24 09:56:34.189 257704 DEBUG oslo_concurrency.lockutils [req-c7d8604c-e7c9-42e5-a5e0-38dfaed356e4 req-881bd663-3000-438f-a390-33bbff03d6ad 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:56:34 compute-0 nova_compute[257700]: 2025-11-24 09:56:34.189 257704 DEBUG oslo_concurrency.lockutils [req-c7d8604c-e7c9-42e5-a5e0-38dfaed356e4 req-881bd663-3000-438f-a390-33bbff03d6ad 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:56:34 compute-0 nova_compute[257700]: 2025-11-24 09:56:34.189 257704 DEBUG oslo_concurrency.lockutils [req-c7d8604c-e7c9-42e5-a5e0-38dfaed356e4 req-881bd663-3000-438f-a390-33bbff03d6ad 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:56:34 compute-0 nova_compute[257700]: 2025-11-24 09:56:34.189 257704 DEBUG nova.compute.manager [req-c7d8604c-e7c9-42e5-a5e0-38dfaed356e4 req-881bd663-3000-438f-a390-33bbff03d6ad 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] No waiting events found dispatching network-vif-plugged-c6d0f148-a8f4-467e-8be3-1a120663dc95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:56:34 compute-0 nova_compute[257700]: 2025-11-24 09:56:34.190 257704 WARNING nova.compute.manager [req-c7d8604c-e7c9-42e5-a5e0-38dfaed356e4 req-881bd663-3000-438f-a390-33bbff03d6ad 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Received unexpected event network-vif-plugged-c6d0f148-a8f4-467e-8be3-1a120663dc95 for instance with vm_state active and task_state None.
Nov 24 09:56:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:34.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:34 compute-0 ceph-mon[74331]: pgmap v890: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 24 09:56:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:56:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:56:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:56:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:56:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 24 09:56:35 compute-0 nova_compute[257700]: 2025-11-24 09:56:35.780 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:35 compute-0 nova_compute[257700]: 2025-11-24 09:56:35.886 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:36.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:36 compute-0 ovn_controller[155123]: 2025-11-24T09:56:36Z|00044|binding|INFO|Releasing lport 825c51a9-1ab7-4d33-9d7f-c9278b05a734 from this chassis (sb_readonly=0)
Nov 24 09:56:36 compute-0 nova_compute[257700]: 2025-11-24 09:56:36.674 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:36 compute-0 NetworkManager[48883]: <info>  [1763978196.6774] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 24 09:56:36 compute-0 NetworkManager[48883]: <info>  [1763978196.6788] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 24 09:56:36 compute-0 ovn_controller[155123]: 2025-11-24T09:56:36Z|00045|binding|INFO|Releasing lport 825c51a9-1ab7-4d33-9d7f-c9278b05a734 from this chassis (sb_readonly=0)
Nov 24 09:56:36 compute-0 nova_compute[257700]: 2025-11-24 09:56:36.712 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:36 compute-0 nova_compute[257700]: 2025-11-24 09:56:36.717 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:36.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:37 compute-0 ceph-mon[74331]: pgmap v891: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 24 09:56:37 compute-0 nova_compute[257700]: 2025-11-24 09:56:37.146 257704 DEBUG nova.compute.manager [req-966b4909-f676-4232-91db-49ac3c2dd4a2 req-f80b90ec-5228-47e8-99ef-c910b96f1d70 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Received event network-changed-c6d0f148-a8f4-467e-8be3-1a120663dc95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:37 compute-0 nova_compute[257700]: 2025-11-24 09:56:37.147 257704 DEBUG nova.compute.manager [req-966b4909-f676-4232-91db-49ac3c2dd4a2 req-f80b90ec-5228-47e8-99ef-c910b96f1d70 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Refreshing instance network info cache due to event network-changed-c6d0f148-a8f4-467e-8be3-1a120663dc95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:56:37 compute-0 nova_compute[257700]: 2025-11-24 09:56:37.147 257704 DEBUG oslo_concurrency.lockutils [req-966b4909-f676-4232-91db-49ac3c2dd4a2 req-f80b90ec-5228-47e8-99ef-c910b96f1d70 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:56:37 compute-0 nova_compute[257700]: 2025-11-24 09:56:37.147 257704 DEBUG oslo_concurrency.lockutils [req-966b4909-f676-4232-91db-49ac3c2dd4a2 req-f80b90ec-5228-47e8-99ef-c910b96f1d70 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:56:37 compute-0 nova_compute[257700]: 2025-11-24 09:56:37.147 257704 DEBUG nova.network.neutron [req-966b4909-f676-4232-91db-49ac3c2dd4a2 req-f80b90ec-5228-47e8-99ef-c910b96f1d70 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Refreshing network info cache for port c6d0f148-a8f4-467e-8be3-1a120663dc95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:56:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:37.516Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:56:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:37.516Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:56:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:37.517Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:56:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v892: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:56:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:38.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:38 compute-0 nova_compute[257700]: 2025-11-24 09:56:38.290 257704 DEBUG nova.network.neutron [req-966b4909-f676-4232-91db-49ac3c2dd4a2 req-f80b90ec-5228-47e8-99ef-c910b96f1d70 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Updated VIF entry in instance network info cache for port c6d0f148-a8f4-467e-8be3-1a120663dc95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:56:38 compute-0 nova_compute[257700]: 2025-11-24 09:56:38.291 257704 DEBUG nova.network.neutron [req-966b4909-f676-4232-91db-49ac3c2dd4a2 req-f80b90ec-5228-47e8-99ef-c910b96f1d70 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Updating instance_info_cache with network_info: [{"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:56:38 compute-0 nova_compute[257700]: 2025-11-24 09:56:38.331 257704 DEBUG oslo_concurrency.lockutils [req-966b4909-f676-4232-91db-49ac3c2dd4a2 req-f80b90ec-5228-47e8-99ef-c910b96f1d70 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:56:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:38.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:56:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:38.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:39 compute-0 ceph-mon[74331]: pgmap v892: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:56:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 24 09:56:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:56:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:56:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:56:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:56:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:40.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:40 compute-0 nova_compute[257700]: 2025-11-24 09:56:40.783 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:40 compute-0 nova_compute[257700]: 2025-11-24 09:56:40.888 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:40.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:56:40] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 09:56:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:56:40] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 09:56:41 compute-0 ceph-mon[74331]: pgmap v893: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 24 09:56:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 24 09:56:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:42.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:42 compute-0 podman[269407]: 2025-11-24 09:56:42.763607729 +0000 UTC m=+0.041740594 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 09:56:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:42.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:43 compute-0 ceph-mon[74331]: pgmap v894: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 24 09:56:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:56:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:44.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:44.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:56:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:56:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:56:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:56:45 compute-0 ceph-mon[74331]: pgmap v895: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:56:45
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.control', 'images', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', '.nfs', 'backups', 'vms']
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:56:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:56:45 compute-0 nova_compute[257700]: 2025-11-24 09:56:45.787 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:45 compute-0 nova_compute[257700]: 2025-11-24 09:56:45.889 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:46.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:56:46 compute-0 sudo[269430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:56:46 compute-0 sudo[269430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:56:46 compute-0 sudo[269430]: pam_unix(sudo:session): session closed for user root
Nov 24 09:56:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:46.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:47 compute-0 ovn_controller[155123]: 2025-11-24T09:56:47Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:88:13:8b 10.100.0.6
Nov 24 09:56:47 compute-0 ovn_controller[155123]: 2025-11-24T09:56:47Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:88:13:8b 10.100.0.6
Nov 24 09:56:47 compute-0 ceph-mon[74331]: pgmap v896: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Nov 24 09:56:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:47.517Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:56:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:47.518Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:56:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:47.518Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:56:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 24 09:56:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:48.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:48.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:56:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:48.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:49 compute-0 ceph-mon[74331]: pgmap v897: 353 pgs: 353 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 24 09:56:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 353 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Nov 24 09:56:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:56:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:56:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:56:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:56:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:50.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:50 compute-0 nova_compute[257700]: 2025-11-24 09:56:50.788 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:50 compute-0 nova_compute[257700]: 2025-11-24 09:56:50.891 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:50.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:56:50] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 24 09:56:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:56:50] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 24 09:56:51 compute-0 ceph-mon[74331]: pgmap v898: 353 pgs: 353 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 353 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Nov 24 09:56:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 353 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Nov 24 09:56:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:52.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:52 compute-0 sshd-session[269461]: Invalid user ntps from 83.229.122.23 port 39408
Nov 24 09:56:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:52.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:53 compute-0 sshd-session[269461]: Received disconnect from 83.229.122.23 port 39408:11: Bye Bye [preauth]
Nov 24 09:56:53 compute-0 sshd-session[269461]: Disconnected from invalid user ntps 83.229.122.23 port 39408 [preauth]
Nov 24 09:56:53 compute-0 ceph-mon[74331]: pgmap v899: 353 pgs: 353 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 353 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Nov 24 09:56:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 24 09:56:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:53 compute-0 nova_compute[257700]: 2025-11-24 09:56:53.732 257704 INFO nova.compute.manager [None req-f15d3179-a04e-4719-a5ab-e156a714cdcf 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Get console output
Nov 24 09:56:53 compute-0 nova_compute[257700]: 2025-11-24 09:56:53.737 266539 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 09:56:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:54.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:54.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:56:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:56:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:56:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:56:55 compute-0 ceph-mon[74331]: pgmap v900: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 24 09:56:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 24 09:56:55 compute-0 nova_compute[257700]: 2025-11-24 09:56:55.791 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:55 compute-0 nova_compute[257700]: 2025-11-24 09:56:55.893 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:56:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:56.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.476167) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216476216, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2128, "num_deletes": 251, "total_data_size": 4102692, "memory_usage": 4160592, "flush_reason": "Manual Compaction"}
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 24 09:56:56 compute-0 nova_compute[257700]: 2025-11-24 09:56:56.500 257704 DEBUG nova.compute.manager [req-002d94f6-7b46-4a46-ae91-1ef9c578cba8 req-0f66b74c-41d2-41ee-9bbf-8409a80af13b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Received event network-changed-c6d0f148-a8f4-467e-8be3-1a120663dc95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:56:56 compute-0 nova_compute[257700]: 2025-11-24 09:56:56.500 257704 DEBUG nova.compute.manager [req-002d94f6-7b46-4a46-ae91-1ef9c578cba8 req-0f66b74c-41d2-41ee-9bbf-8409a80af13b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Refreshing instance network info cache due to event network-changed-c6d0f148-a8f4-467e-8be3-1a120663dc95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216501400, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3984471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24799, "largest_seqno": 26925, "table_properties": {"data_size": 3975086, "index_size": 5879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19668, "raw_average_key_size": 20, "raw_value_size": 3956079, "raw_average_value_size": 4078, "num_data_blocks": 259, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978006, "oldest_key_time": 1763978006, "file_creation_time": 1763978216, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:56:56 compute-0 nova_compute[257700]: 2025-11-24 09:56:56.501 257704 DEBUG oslo_concurrency.lockutils [req-002d94f6-7b46-4a46-ae91-1ef9c578cba8 req-0f66b74c-41d2-41ee-9bbf-8409a80af13b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 25360 microseconds, and 8004 cpu microseconds.
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:56:56 compute-0 nova_compute[257700]: 2025-11-24 09:56:56.501 257704 DEBUG oslo_concurrency.lockutils [req-002d94f6-7b46-4a46-ae91-1ef9c578cba8 req-0f66b74c-41d2-41ee-9bbf-8409a80af13b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:56:56 compute-0 nova_compute[257700]: 2025-11-24 09:56:56.501 257704 DEBUG nova.network.neutron [req-002d94f6-7b46-4a46-ae91-1ef9c578cba8 req-0f66b74c-41d2-41ee-9bbf-8409a80af13b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Refreshing network info cache for port c6d0f148-a8f4-467e-8be3-1a120663dc95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.501531) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3984471 bytes OK
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.501574) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.503261) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.503277) EVENT_LOG_v1 {"time_micros": 1763978216503271, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.503296) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4094109, prev total WAL file size 4094109, number of live WAL files 2.
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.504649) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3891KB)], [56(11MB)]
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216504717, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 16531445, "oldest_snapshot_seqno": -1}
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5848 keys, 14385339 bytes, temperature: kUnknown
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216579013, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14385339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14345488, "index_size": 24125, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148749, "raw_average_key_size": 25, "raw_value_size": 14239135, "raw_average_value_size": 2434, "num_data_blocks": 982, "num_entries": 5848, "num_filter_entries": 5848, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763978216, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.579342) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14385339 bytes
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.581415) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.4 rd, 193.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 12.0 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 6368, records dropped: 520 output_compression: NoCompression
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.581453) EVENT_LOG_v1 {"time_micros": 1763978216581437, "job": 30, "event": "compaction_finished", "compaction_time_micros": 74340, "compaction_time_cpu_micros": 27499, "output_level": 6, "num_output_files": 1, "total_output_size": 14385339, "num_input_records": 6368, "num_output_records": 5848, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216582626, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978216584907, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.504578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.585004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.585008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.585010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.585011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:56:56 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:56:56.585013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:56:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:56:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:56.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:56:57 compute-0 ceph-mon[74331]: pgmap v901: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 24 09:56:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:57.518Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:56:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 421 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 24 09:56:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:56:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:56:58.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:56:58 compute-0 nova_compute[257700]: 2025-11-24 09:56:58.328 257704 DEBUG nova.network.neutron [req-002d94f6-7b46-4a46-ae91-1ef9c578cba8 req-0f66b74c-41d2-41ee-9bbf-8409a80af13b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Updated VIF entry in instance network info cache for port c6d0f148-a8f4-467e-8be3-1a120663dc95. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:56:58 compute-0 nova_compute[257700]: 2025-11-24 09:56:58.328 257704 DEBUG nova.network.neutron [req-002d94f6-7b46-4a46-ae91-1ef9c578cba8 req-0f66b74c-41d2-41ee-9bbf-8409a80af13b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Updating instance_info_cache with network_info: [{"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:56:58 compute-0 nova_compute[257700]: 2025-11-24 09:56:58.346 257704 DEBUG oslo_concurrency.lockutils [req-002d94f6-7b46-4a46-ae91-1ef9c578cba8 req-0f66b74c-41d2-41ee-9bbf-8409a80af13b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:56:58 compute-0 ceph-mon[74331]: pgmap v902: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 421 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 24 09:56:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:56:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:56:58.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:56:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:56:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:56:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:56:58.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:56:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 104 KiB/s wr, 16 op/s
Nov 24 09:57:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:57:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:57:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:56:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:57:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:57:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:00.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:00 compute-0 ceph-mon[74331]: pgmap v903: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 104 KiB/s wr, 16 op/s
Nov 24 09:57:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:57:00 compute-0 nova_compute[257700]: 2025-11-24 09:57:00.830 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:00 compute-0 nova_compute[257700]: 2025-11-24 09:57:00.894 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:00.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:57:00] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:57:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:57:00] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:57:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 104 KiB/s wr, 16 op/s
Nov 24 09:57:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1740854011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:57:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1740854011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:57:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:02.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:02 compute-0 ceph-mon[74331]: pgmap v904: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 104 KiB/s wr, 16 op/s
Nov 24 09:57:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:02.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 107 KiB/s wr, 17 op/s
Nov 24 09:57:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:03 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2841874782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:04.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:04 compute-0 podman[269476]: 2025-11-24 09:57:04.805606066 +0000 UTC m=+0.065588555 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 24 09:57:04 compute-0 ceph-mon[74331]: pgmap v905: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 107 KiB/s wr, 17 op/s
Nov 24 09:57:04 compute-0 podman[269477]: 2025-11-24 09:57:04.82633599 +0000 UTC m=+0.087498467 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
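
[Annotation] The two podman events above are periodic container health checks for the edpm_ansible-managed multipathd and ovn_controller containers; per the config_data each probe runs the container's /openstack/healthcheck script, and both currently report health_status=healthy with a zero failing streak. A one-off re-check can be driven the same way, as sketched below (the container names come from the log; running this requires podman privileges on the host).

    # Trigger an immediate health check for the two containers seen above.
    # `podman healthcheck run NAME` exits 0 when the check passes.
    import subprocess

    for name in ("multipathd", "ovn_controller"):
        r = subprocess.run(["podman", "healthcheck", "run", name],
                           capture_output=True, text=True)
        status = "healthy" if r.returncode == 0 else \
            f"unhealthy: {r.stdout.strip() or r.stderr.strip()}"
        print(name, status)
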
Nov 24 09:57:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:04.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:57:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:57:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:57:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
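
[Annotation] The ganesha.nfsd block above keeps re-entering a 90-second grace period with a client reclaim count of 0, and rados_cluster_grace_enforcing reports ret=-45. The value looks like a negative errno in the librados convention; the mechanical decode is shown below (run on Linux), though what the recovery backend intends by it depends on Ganesha's rados_cluster code rather than the errno name itself.

    # Decode the negative return value from the ganesha log line above.
    # On Linux, errno 45 maps to EL2NSYNC.
    import errno, os

    code = 45
    print(errno.errorcode.get(code), "-", os.strerror(code))
    # expected on Linux: EL2NSYNC - Level 2 not synchronized
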
Nov 24 09:57:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 16 KiB/s wr, 2 op/s
Nov 24 09:57:05 compute-0 nova_compute[257700]: 2025-11-24 09:57:05.832 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:05 compute-0 nova_compute[257700]: 2025-11-24 09:57:05.895 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:57:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:06.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:57:06 compute-0 sudo[269524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:57:06 compute-0 sudo[269524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:06 compute-0 sudo[269524]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:06 compute-0 ceph-mon[74331]: pgmap v906: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 16 KiB/s wr, 2 op/s
Nov 24 09:57:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:06.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:07.518Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:57:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 24 09:57:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:08.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:08 compute-0 sudo[269550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:57:08 compute-0 sudo[269550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:08 compute-0 sudo[269550]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:08 compute-0 sudo[269575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 24 09:57:08 compute-0 sudo[269575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:08 compute-0 sudo[269575]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:57:08 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:57:08 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:08 compute-0 sudo[269621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:57:08 compute-0 sudo[269621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:08 compute-0 sudo[269621]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:08 compute-0 sudo[269646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:57:08 compute-0 sudo[269646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:08.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:57:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:08.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:09 compute-0 sudo[269646]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:09 compute-0 sudo[269700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:57:09 compute-0 sudo[269700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:09 compute-0 sudo[269700]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:09 compute-0 sudo[269725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- inventory --format=json-pretty --filter-for-batch
Nov 24 09:57:09 compute-0 sudo[269725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:09 compute-0 ceph-mon[74331]: pgmap v907: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 24 09:57:09 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:09 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 09:57:09 compute-0 podman[269794]: 2025-11-24 09:57:09.619713329 +0000 UTC m=+0.037898132 container create 08f871158aab4aad8484cdf0c1f366995f995ce369a48553b05d28830af9ad8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:57:09 compute-0 systemd[1]: Started libpod-conmon-08f871158aab4aad8484cdf0c1f366995f995ce369a48553b05d28830af9ad8d.scope.
Nov 24 09:57:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:57:09 compute-0 podman[269794]: 2025-11-24 09:57:09.690391726 +0000 UTC m=+0.108576569 container init 08f871158aab4aad8484cdf0c1f366995f995ce369a48553b05d28830af9ad8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:57:09 compute-0 podman[269794]: 2025-11-24 09:57:09.697769695 +0000 UTC m=+0.115954508 container start 08f871158aab4aad8484cdf0c1f366995f995ce369a48553b05d28830af9ad8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chatterjee, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:57:09 compute-0 podman[269794]: 2025-11-24 09:57:09.603905474 +0000 UTC m=+0.022090307 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:57:09 compute-0 podman[269794]: 2025-11-24 09:57:09.700643124 +0000 UTC m=+0.118827937 container attach 08f871158aab4aad8484cdf0c1f366995f995ce369a48553b05d28830af9ad8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chatterjee, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:57:09 compute-0 agitated_chatterjee[269810]: 167 167
Nov 24 09:57:09 compute-0 systemd[1]: libpod-08f871158aab4aad8484cdf0c1f366995f995ce369a48553b05d28830af9ad8d.scope: Deactivated successfully.
Nov 24 09:57:09 compute-0 conmon[269810]: conmon 08f871158aab4aad8484 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08f871158aab4aad8484cdf0c1f366995f995ce369a48553b05d28830af9ad8d.scope/container/memory.events
Nov 24 09:57:09 compute-0 podman[269794]: 2025-11-24 09:57:09.704703073 +0000 UTC m=+0.122887896 container died 08f871158aab4aad8484cdf0c1f366995f995ce369a48553b05d28830af9ad8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chatterjee, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 09:57:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c47120667122acb7f66cc1d338c50da0ab9d36a1cf6d61c58ac30385fa13a60-merged.mount: Deactivated successfully.
Nov 24 09:57:09 compute-0 podman[269794]: 2025-11-24 09:57:09.746521729 +0000 UTC m=+0.164706542 container remove 08f871158aab4aad8484cdf0c1f366995f995ce369a48553b05d28830af9ad8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:57:09 compute-0 systemd[1]: libpod-conmon-08f871158aab4aad8484cdf0c1f366995f995ce369a48553b05d28830af9ad8d.scope: Deactivated successfully.
Nov 24 09:57:09 compute-0 podman[269833]: 2025-11-24 09:57:09.913079566 +0000 UTC m=+0.040683369 container create 8d63a628b0a4df7e10ee58613fd2c2d3b8bc2503cedcdb9f91239962a5f31187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_sammet, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:57:09 compute-0 systemd[1]: Started libpod-conmon-8d63a628b0a4df7e10ee58613fd2c2d3b8bc2503cedcdb9f91239962a5f31187.scope.
Nov 24 09:57:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb314da646af61e9b97089e02d95c259aa3084a508c44a9cf20d6aa70d5f69c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb314da646af61e9b97089e02d95c259aa3084a508c44a9cf20d6aa70d5f69c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb314da646af61e9b97089e02d95c259aa3084a508c44a9cf20d6aa70d5f69c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb314da646af61e9b97089e02d95c259aa3084a508c44a9cf20d6aa70d5f69c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:09 compute-0 podman[269833]: 2025-11-24 09:57:09.989975894 +0000 UTC m=+0.117579717 container init 8d63a628b0a4df7e10ee58613fd2c2d3b8bc2503cedcdb9f91239962a5f31187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:57:09 compute-0 podman[269833]: 2025-11-24 09:57:09.896230447 +0000 UTC m=+0.023834270 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:57:09 compute-0 podman[269833]: 2025-11-24 09:57:09.999707031 +0000 UTC m=+0.127310834 container start 8d63a628b0a4df7e10ee58613fd2c2d3b8bc2503cedcdb9f91239962a5f31187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_sammet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:57:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:57:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:57:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:57:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:57:10 compute-0 podman[269833]: 2025-11-24 09:57:10.004240331 +0000 UTC m=+0.131844154 container attach 8d63a628b0a4df7e10ee58613fd2c2d3b8bc2503cedcdb9f91239962a5f31187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_sammet, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:57:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:10.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:10 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/146065392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:57:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:57:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:57:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]: [
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:     {
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:         "available": false,
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:         "being_replaced": false,
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:         "ceph_device_lvm": false,
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:         "lsm_data": {},
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:         "lvs": [],
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:         "path": "/dev/sr0",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:         "rejected_reasons": [
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "Has a FileSystem",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "Insufficient space (<5GB)"
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:         ],
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:         "sys_api": {
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "actuators": null,
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "device_nodes": [
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:                 "sr0"
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             ],
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "devname": "sr0",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "human_readable_size": "482.00 KB",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "id_bus": "ata",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "model": "QEMU DVD-ROM",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "nr_requests": "2",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "parent": "/dev/sr0",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "partitions": {},
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "path": "/dev/sr0",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "removable": "1",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "rev": "2.5+",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "ro": "0",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "rotational": "1",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "sas_address": "",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "sas_device_handle": "",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "scheduler_mode": "mq-deadline",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "sectors": 0,
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "sectorsize": "2048",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "size": 493568.0,
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "support_discard": "2048",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "type": "disk",
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:             "vendor": "QEMU"
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:         }
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]:     }
Nov 24 09:57:10 compute-0 sleepy_sammet[269849]: ]
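
[Annotation] The JSON block above is the output of the `ceph-volume ... inventory --format=json-pretty --filter-for-batch` run launched at 09:57:09 inside the short-lived sleepy_sammet container: the only device reported is /dev/sr0 (the QEMU DVD-ROM), rejected with "Has a FileSystem" and "Insufficient space (<5GB)", so no physical data devices are usable on this host. A small filter over that JSON is sketched below; "inventory.json" is a hypothetical file holding the block printed above.

    # Filter ceph-volume inventory JSON: list usable devices, and the
    # rejection reasons for everything else. Keys match the output above.
    import json

    with open("inventory.json") as fh:   # hypothetical capture of the JSON
        devices = json.load(fh)

    for dev in devices:
        if dev["available"]:
            print("usable:", dev["path"],
                  dev["sys_api"]["human_readable_size"])
        else:
            print("rejected:", dev["path"], "-",
                  "; ".join(dev["rejected_reasons"]))
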
Nov 24 09:57:10 compute-0 systemd[1]: libpod-8d63a628b0a4df7e10ee58613fd2c2d3b8bc2503cedcdb9f91239962a5f31187.scope: Deactivated successfully.
Nov 24 09:57:10 compute-0 podman[269833]: 2025-11-24 09:57:10.773422719 +0000 UTC m=+0.901026542 container died 8d63a628b0a4df7e10ee58613fd2c2d3b8bc2503cedcdb9f91239962a5f31187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_sammet, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Nov 24 09:57:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb314da646af61e9b97089e02d95c259aa3084a508c44a9cf20d6aa70d5f69c5-merged.mount: Deactivated successfully.
Nov 24 09:57:10 compute-0 podman[269833]: 2025-11-24 09:57:10.813644526 +0000 UTC m=+0.941248329 container remove 8d63a628b0a4df7e10ee58613fd2c2d3b8bc2503cedcdb9f91239962a5f31187 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:57:10 compute-0 systemd[1]: libpod-conmon-8d63a628b0a4df7e10ee58613fd2c2d3b8bc2503cedcdb9f91239962a5f31187.scope: Deactivated successfully.
Nov 24 09:57:10 compute-0 nova_compute[257700]: 2025-11-24 09:57:10.835 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:10 compute-0 sudo[269725]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:57:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:57:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:10 compute-0 nova_compute[257700]: 2025-11-24 09:57:10.897 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 30 op/s
Nov 24 09:57:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 MiB/s wr, 36 op/s
Nov 24 09:57:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:57:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:57:10 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:10.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:10 compute-0 sudo[271302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:57:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:57:10] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:57:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:57:10] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:57:10 compute-0 sudo[271302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:10 compute-0 sudo[271302]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:11 compute-0 sudo[271327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:57:11 compute-0 sudo[271327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:11 compute-0 podman[271392]: 2025-11-24 09:57:11.424547189 +0000 UTC m=+0.047299620 container create 5df7be63e5c9816de4451a0bb0e620373428797dfdcf3bc3609f1624f7c7b928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_tharp, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:57:11 compute-0 systemd[1]: Started libpod-conmon-5df7be63e5c9816de4451a0bb0e620373428797dfdcf3bc3609f1624f7c7b928.scope.
Nov 24 09:57:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:57:11 compute-0 podman[271392]: 2025-11-24 09:57:11.404849791 +0000 UTC m=+0.027602232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:57:11 compute-0 podman[271392]: 2025-11-24 09:57:11.501889938 +0000 UTC m=+0.124642379 container init 5df7be63e5c9816de4451a0bb0e620373428797dfdcf3bc3609f1624f7c7b928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 09:57:11 compute-0 podman[271392]: 2025-11-24 09:57:11.510909097 +0000 UTC m=+0.133661518 container start 5df7be63e5c9816de4451a0bb0e620373428797dfdcf3bc3609f1624f7c7b928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 09:57:11 compute-0 ceph-mon[74331]: pgmap v908: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 09:57:11 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:11 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4164713245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:57:11 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:11 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:11 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:57:11 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:57:11 compute-0 ceph-mon[74331]: pgmap v909: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 30 op/s
Nov 24 09:57:11 compute-0 ceph-mon[74331]: pgmap v910: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 MiB/s wr, 36 op/s
Nov 24 09:57:11 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:11 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:11 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:57:11 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:57:11 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:57:11 compute-0 podman[271392]: 2025-11-24 09:57:11.513905659 +0000 UTC m=+0.136658100 container attach 5df7be63e5c9816de4451a0bb0e620373428797dfdcf3bc3609f1624f7c7b928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 09:57:11 compute-0 clever_tharp[271409]: 167 167
Nov 24 09:57:11 compute-0 systemd[1]: libpod-5df7be63e5c9816de4451a0bb0e620373428797dfdcf3bc3609f1624f7c7b928.scope: Deactivated successfully.
Nov 24 09:57:11 compute-0 conmon[271409]: conmon 5df7be63e5c9816de445 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5df7be63e5c9816de4451a0bb0e620373428797dfdcf3bc3609f1624f7c7b928.scope/container/memory.events
Nov 24 09:57:11 compute-0 podman[271392]: 2025-11-24 09:57:11.519236519 +0000 UTC m=+0.141988940 container died 5df7be63e5c9816de4451a0bb0e620373428797dfdcf3bc3609f1624f7c7b928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Nov 24 09:57:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebe07b1b44ff880f06ebc629ecffa418a81cdc705bad34488980ad045491648c-merged.mount: Deactivated successfully.
Nov 24 09:57:11 compute-0 podman[271392]: 2025-11-24 09:57:11.555863169 +0000 UTC m=+0.178615590 container remove 5df7be63e5c9816de4451a0bb0e620373428797dfdcf3bc3609f1624f7c7b928 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 24 09:57:11 compute-0 systemd[1]: libpod-conmon-5df7be63e5c9816de4451a0bb0e620373428797dfdcf3bc3609f1624f7c7b928.scope: Deactivated successfully.
Nov 24 09:57:11 compute-0 podman[271435]: 2025-11-24 09:57:11.729866487 +0000 UTC m=+0.049350311 container create 913e6956c7ab501d7bb9cd230cbc3049aa9275e48acf8b723fa026b48821b0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_wright, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:57:11 compute-0 systemd[1]: Started libpod-conmon-913e6956c7ab501d7bb9cd230cbc3049aa9275e48acf8b723fa026b48821b0f8.scope.
Nov 24 09:57:11 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa6105ac4390c7acb818e2ff6709adf380ff5c628f85612bd377c9e1449a1a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa6105ac4390c7acb818e2ff6709adf380ff5c628f85612bd377c9e1449a1a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa6105ac4390c7acb818e2ff6709adf380ff5c628f85612bd377c9e1449a1a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa6105ac4390c7acb818e2ff6709adf380ff5c628f85612bd377c9e1449a1a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa6105ac4390c7acb818e2ff6709adf380ff5c628f85612bd377c9e1449a1a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:11 compute-0 podman[271435]: 2025-11-24 09:57:11.70858707 +0000 UTC m=+0.028070914 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:57:11 compute-0 podman[271435]: 2025-11-24 09:57:11.819627418 +0000 UTC m=+0.139111252 container init 913e6956c7ab501d7bb9cd230cbc3049aa9275e48acf8b723fa026b48821b0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:57:11 compute-0 podman[271435]: 2025-11-24 09:57:11.830509872 +0000 UTC m=+0.149993706 container start 913e6956c7ab501d7bb9cd230cbc3049aa9275e48acf8b723fa026b48821b0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_wright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:57:11 compute-0 podman[271435]: 2025-11-24 09:57:11.834048738 +0000 UTC m=+0.153532582 container attach 913e6956c7ab501d7bb9cd230cbc3049aa9275e48acf8b723fa026b48821b0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_wright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:57:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:12.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:12 compute-0 elastic_wright[271451]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:57:12 compute-0 elastic_wright[271451]: --> All data devices are unavailable
Nov 24 09:57:12 compute-0 systemd[1]: libpod-913e6956c7ab501d7bb9cd230cbc3049aa9275e48acf8b723fa026b48821b0f8.scope: Deactivated successfully.
Nov 24 09:57:12 compute-0 podman[271435]: 2025-11-24 09:57:12.216870409 +0000 UTC m=+0.536354243 container died 913e6956c7ab501d7bb9cd230cbc3049aa9275e48acf8b723fa026b48821b0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 09:57:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aa6105ac4390c7acb818e2ff6709adf380ff5c628f85612bd377c9e1449a1a9-merged.mount: Deactivated successfully.
Nov 24 09:57:12 compute-0 podman[271435]: 2025-11-24 09:57:12.265648014 +0000 UTC m=+0.585131838 container remove 913e6956c7ab501d7bb9cd230cbc3049aa9275e48acf8b723fa026b48821b0f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_wright, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 09:57:12 compute-0 systemd[1]: libpod-conmon-913e6956c7ab501d7bb9cd230cbc3049aa9275e48acf8b723fa026b48821b0f8.scope: Deactivated successfully.
Nov 24 09:57:12 compute-0 sudo[271327]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:12 compute-0 sudo[271478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:57:12 compute-0 sudo[271478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:12 compute-0 sudo[271478]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:12 compute-0 sudo[271503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:57:12 compute-0 sudo[271503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:12 compute-0 podman[271569]: 2025-11-24 09:57:12.882542502 +0000 UTC m=+0.048466589 container create bf196040b63d9148dcadbe59e3b8c51205b24c253bd7ea96acd97fbc1fb04399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 09:57:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 MiB/s wr, 38 op/s
Nov 24 09:57:12 compute-0 systemd[1]: Started libpod-conmon-bf196040b63d9148dcadbe59e3b8c51205b24c253bd7ea96acd97fbc1fb04399.scope.
Nov 24 09:57:12 compute-0 podman[271569]: 2025-11-24 09:57:12.860279541 +0000 UTC m=+0.026203678 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:57:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:57:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:12.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:12 compute-0 podman[271569]: 2025-11-24 09:57:12.978059413 +0000 UTC m=+0.143983520 container init bf196040b63d9148dcadbe59e3b8c51205b24c253bd7ea96acd97fbc1fb04399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_haibt, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Nov 24 09:57:12 compute-0 podman[271569]: 2025-11-24 09:57:12.985450852 +0000 UTC m=+0.151374939 container start bf196040b63d9148dcadbe59e3b8c51205b24c253bd7ea96acd97fbc1fb04399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_haibt, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Nov 24 09:57:12 compute-0 podman[271569]: 2025-11-24 09:57:12.989119911 +0000 UTC m=+0.155044018 container attach bf196040b63d9148dcadbe59e3b8c51205b24c253bd7ea96acd97fbc1fb04399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:57:12 compute-0 vigorous_haibt[271586]: 167 167
Nov 24 09:57:12 compute-0 systemd[1]: libpod-bf196040b63d9148dcadbe59e3b8c51205b24c253bd7ea96acd97fbc1fb04399.scope: Deactivated successfully.
Nov 24 09:57:12 compute-0 podman[271569]: 2025-11-24 09:57:12.991700194 +0000 UTC m=+0.157624271 container died bf196040b63d9148dcadbe59e3b8c51205b24c253bd7ea96acd97fbc1fb04399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_haibt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:57:13 compute-0 podman[271585]: 2025-11-24 09:57:13.001617895 +0000 UTC m=+0.074787858 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 09:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7f9926684dd7d8fc788832c873e7f6f8bdda4b36c25e646dd43b9ea30bbc3b3-merged.mount: Deactivated successfully.
Nov 24 09:57:13 compute-0 podman[271569]: 2025-11-24 09:57:13.029368109 +0000 UTC m=+0.195292206 container remove bf196040b63d9148dcadbe59e3b8c51205b24c253bd7ea96acd97fbc1fb04399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 24 09:57:13 compute-0 systemd[1]: libpod-conmon-bf196040b63d9148dcadbe59e3b8c51205b24c253bd7ea96acd97fbc1fb04399.scope: Deactivated successfully.
Nov 24 09:57:13 compute-0 podman[271628]: 2025-11-24 09:57:13.200233121 +0000 UTC m=+0.047551127 container create 01ea7dd9a23b4eb1cd3223768582f0829e39a05141f4ef54d1c3359eeb51999f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kare, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:57:13 compute-0 systemd[1]: Started libpod-conmon-01ea7dd9a23b4eb1cd3223768582f0829e39a05141f4ef54d1c3359eeb51999f.scope.
Nov 24 09:57:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6ecb7fbe2441436bebc52a854dccc60d2daa2130045a61df50c307fdc90468/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:13 compute-0 podman[271628]: 2025-11-24 09:57:13.17920503 +0000 UTC m=+0.026523056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6ecb7fbe2441436bebc52a854dccc60d2daa2130045a61df50c307fdc90468/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6ecb7fbe2441436bebc52a854dccc60d2daa2130045a61df50c307fdc90468/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6ecb7fbe2441436bebc52a854dccc60d2daa2130045a61df50c307fdc90468/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:13 compute-0 podman[271628]: 2025-11-24 09:57:13.290102034 +0000 UTC m=+0.137420060 container init 01ea7dd9a23b4eb1cd3223768582f0829e39a05141f4ef54d1c3359eeb51999f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kare, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 09:57:13 compute-0 podman[271628]: 2025-11-24 09:57:13.299069252 +0000 UTC m=+0.146387258 container start 01ea7dd9a23b4eb1cd3223768582f0829e39a05141f4ef54d1c3359eeb51999f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kare, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 09:57:13 compute-0 podman[271628]: 2025-11-24 09:57:13.302313931 +0000 UTC m=+0.149631957 container attach 01ea7dd9a23b4eb1cd3223768582f0829e39a05141f4ef54d1c3359eeb51999f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:57:13 compute-0 festive_kare[271644]: {
Nov 24 09:57:13 compute-0 festive_kare[271644]:     "0": [
Nov 24 09:57:13 compute-0 festive_kare[271644]:         {
Nov 24 09:57:13 compute-0 festive_kare[271644]:             "devices": [
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "/dev/loop3"
Nov 24 09:57:13 compute-0 festive_kare[271644]:             ],
Nov 24 09:57:13 compute-0 festive_kare[271644]:             "lv_name": "ceph_lv0",
Nov 24 09:57:13 compute-0 festive_kare[271644]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:57:13 compute-0 festive_kare[271644]:             "lv_size": "21470642176",
Nov 24 09:57:13 compute-0 festive_kare[271644]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:57:13 compute-0 festive_kare[271644]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:57:13 compute-0 festive_kare[271644]:             "name": "ceph_lv0",
Nov 24 09:57:13 compute-0 festive_kare[271644]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:57:13 compute-0 festive_kare[271644]:             "tags": {
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.cluster_name": "ceph",
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.crush_device_class": "",
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.encrypted": "0",
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.osd_id": "0",
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.type": "block",
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.vdo": "0",
Nov 24 09:57:13 compute-0 festive_kare[271644]:                 "ceph.with_tpm": "0"
Nov 24 09:57:13 compute-0 festive_kare[271644]:             },
Nov 24 09:57:13 compute-0 festive_kare[271644]:             "type": "block",
Nov 24 09:57:13 compute-0 festive_kare[271644]:             "vg_name": "ceph_vg0"
Nov 24 09:57:13 compute-0 festive_kare[271644]:         }
Nov 24 09:57:13 compute-0 festive_kare[271644]:     ]
Nov 24 09:57:13 compute-0 festive_kare[271644]: }
Nov 24 09:57:13 compute-0 systemd[1]: libpod-01ea7dd9a23b4eb1cd3223768582f0829e39a05141f4ef54d1c3359eeb51999f.scope: Deactivated successfully.
Nov 24 09:57:13 compute-0 podman[271628]: 2025-11-24 09:57:13.609099685 +0000 UTC m=+0.456417681 container died 01ea7dd9a23b4eb1cd3223768582f0829e39a05141f4ef54d1c3359eeb51999f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kare, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 24 09:57:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd6ecb7fbe2441436bebc52a854dccc60d2daa2130045a61df50c307fdc90468-merged.mount: Deactivated successfully.
Nov 24 09:57:13 compute-0 podman[271628]: 2025-11-24 09:57:13.650821918 +0000 UTC m=+0.498139924 container remove 01ea7dd9a23b4eb1cd3223768582f0829e39a05141f4ef54d1c3359eeb51999f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kare, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 09:57:13 compute-0 systemd[1]: libpod-conmon-01ea7dd9a23b4eb1cd3223768582f0829e39a05141f4ef54d1c3359eeb51999f.scope: Deactivated successfully.
Nov 24 09:57:13 compute-0 sudo[271503]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:13 compute-0 sudo[271666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:57:13 compute-0 sudo[271666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:13 compute-0 sudo[271666]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:13 compute-0 sudo[271691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:57:13 compute-0 sudo[271691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:13 compute-0 nova_compute[257700]: 2025-11-24 09:57:13.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:14.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:14 compute-0 podman[271759]: 2025-11-24 09:57:14.196492405 +0000 UTC m=+0.039607863 container create b1731584867b81b31d1447aa8c3d00f80d55971b144b67171205ad5eb43e225c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_rubin, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:57:14 compute-0 systemd[1]: Started libpod-conmon-b1731584867b81b31d1447aa8c3d00f80d55971b144b67171205ad5eb43e225c.scope.
Nov 24 09:57:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:57:14 compute-0 ceph-mon[74331]: pgmap v911: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 MiB/s wr, 38 op/s
Nov 24 09:57:14 compute-0 podman[271759]: 2025-11-24 09:57:14.273828505 +0000 UTC m=+0.116944103 container init b1731584867b81b31d1447aa8c3d00f80d55971b144b67171205ad5eb43e225c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_rubin, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:57:14 compute-0 podman[271759]: 2025-11-24 09:57:14.179195275 +0000 UTC m=+0.022310753 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:57:14 compute-0 podman[271759]: 2025-11-24 09:57:14.283424988 +0000 UTC m=+0.126540446 container start b1731584867b81b31d1447aa8c3d00f80d55971b144b67171205ad5eb43e225c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_rubin, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 09:57:14 compute-0 podman[271759]: 2025-11-24 09:57:14.286927103 +0000 UTC m=+0.130042581 container attach b1731584867b81b31d1447aa8c3d00f80d55971b144b67171205ad5eb43e225c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 09:57:14 compute-0 kind_rubin[271776]: 167 167
Nov 24 09:57:14 compute-0 systemd[1]: libpod-b1731584867b81b31d1447aa8c3d00f80d55971b144b67171205ad5eb43e225c.scope: Deactivated successfully.
Nov 24 09:57:14 compute-0 conmon[271776]: conmon b1731584867b81b31d14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1731584867b81b31d1447aa8c3d00f80d55971b144b67171205ad5eb43e225c.scope/container/memory.events
Nov 24 09:57:14 compute-0 podman[271759]: 2025-11-24 09:57:14.292093839 +0000 UTC m=+0.135209347 container died b1731584867b81b31d1447aa8c3d00f80d55971b144b67171205ad5eb43e225c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_rubin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 09:57:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-81ac869c2a24196f1bb9480f77df012db5d1fe0dc350661c3861961db0e7b9a2-merged.mount: Deactivated successfully.
Nov 24 09:57:14 compute-0 podman[271759]: 2025-11-24 09:57:14.32797634 +0000 UTC m=+0.171091798 container remove b1731584867b81b31d1447aa8c3d00f80d55971b144b67171205ad5eb43e225c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_rubin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 09:57:14 compute-0 systemd[1]: libpod-conmon-b1731584867b81b31d1447aa8c3d00f80d55971b144b67171205ad5eb43e225c.scope: Deactivated successfully.
Nov 24 09:57:14 compute-0 podman[271801]: 2025-11-24 09:57:14.522057046 +0000 UTC m=+0.058388420 container create b3aee6a6eecf2f1c11b71e21c38659faafcb1497062f4ae4d91aef37cbabcb36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galileo, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:57:14 compute-0 systemd[1]: Started libpod-conmon-b3aee6a6eecf2f1c11b71e21c38659faafcb1497062f4ae4d91aef37cbabcb36.scope.
Nov 24 09:57:14 compute-0 podman[271801]: 2025-11-24 09:57:14.496931115 +0000 UTC m=+0.033262489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:57:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca022a8340b8a536718210066ac7c381e9f48d40cdfeee978a0d19e87f784be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca022a8340b8a536718210066ac7c381e9f48d40cdfeee978a0d19e87f784be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca022a8340b8a536718210066ac7c381e9f48d40cdfeee978a0d19e87f784be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ca022a8340b8a536718210066ac7c381e9f48d40cdfeee978a0d19e87f784be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:57:14 compute-0 podman[271801]: 2025-11-24 09:57:14.634593229 +0000 UTC m=+0.170924583 container init b3aee6a6eecf2f1c11b71e21c38659faafcb1497062f4ae4d91aef37cbabcb36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 09:57:14 compute-0 podman[271801]: 2025-11-24 09:57:14.643325032 +0000 UTC m=+0.179656376 container start b3aee6a6eecf2f1c11b71e21c38659faafcb1497062f4ae4d91aef37cbabcb36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galileo, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 09:57:14 compute-0 podman[271801]: 2025-11-24 09:57:14.647164165 +0000 UTC m=+0.183495529 container attach b3aee6a6eecf2f1c11b71e21c38659faafcb1497062f4ae4d91aef37cbabcb36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:57:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 MiB/s wr, 38 op/s
Nov 24 09:57:14 compute-0 nova_compute[257700]: 2025-11-24 09:57:14.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:14 compute-0 nova_compute[257700]: 2025-11-24 09:57:14.924 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:57:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:57:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:14.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:57:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:57:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:57:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:57:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:57:15 compute-0 lvm[271891]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:57:15 compute-0 lvm[271891]: VG ceph_vg0 finished
Nov 24 09:57:15 compute-0 serene_galileo[271817]: {}
Nov 24 09:57:15 compute-0 systemd[1]: libpod-b3aee6a6eecf2f1c11b71e21c38659faafcb1497062f4ae4d91aef37cbabcb36.scope: Deactivated successfully.
Nov 24 09:57:15 compute-0 podman[271801]: 2025-11-24 09:57:15.361462209 +0000 UTC m=+0.897793543 container died b3aee6a6eecf2f1c11b71e21c38659faafcb1497062f4ae4d91aef37cbabcb36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galileo, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:57:15 compute-0 systemd[1]: libpod-b3aee6a6eecf2f1c11b71e21c38659faafcb1497062f4ae4d91aef37cbabcb36.scope: Consumed 1.172s CPU time.
Nov 24 09:57:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ca022a8340b8a536718210066ac7c381e9f48d40cdfeee978a0d19e87f784be-merged.mount: Deactivated successfully.
Nov 24 09:57:15 compute-0 podman[271801]: 2025-11-24 09:57:15.406130045 +0000 UTC m=+0.942461379 container remove b3aee6a6eecf2f1c11b71e21c38659faafcb1497062f4ae4d91aef37cbabcb36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_galileo, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Nov 24 09:57:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:57:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:57:15 compute-0 systemd[1]: libpod-conmon-b3aee6a6eecf2f1c11b71e21c38659faafcb1497062f4ae4d91aef37cbabcb36.scope: Deactivated successfully.
Nov 24 09:57:15 compute-0 sudo[271691]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:57:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:57:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:57:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:57:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:57:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:57:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:15 compute-0 sudo[271907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:57:15 compute-0 sudo[271907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:15 compute-0 sudo[271907]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:15 compute-0 nova_compute[257700]: 2025-11-24 09:57:15.837 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:15 compute-0 nova_compute[257700]: 2025-11-24 09:57:15.899 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:15 compute-0 nova_compute[257700]: 2025-11-24 09:57:15.923 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:16.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:16 compute-0 ceph-mon[74331]: pgmap v912: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.3 MiB/s wr, 38 op/s
Nov 24 09:57:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:57:16 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:16 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:57:16 compute-0 nova_compute[257700]: 2025-11-24 09:57:16.896 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 16 KiB/s wr, 96 op/s
Nov 24 09:57:16 compute-0 nova_compute[257700]: 2025-11-24 09:57:16.938 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Triggering sync for uuid b014f69e-04e5-4c5d-bb6c-e88b4410e6ab _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 24 09:57:16 compute-0 nova_compute[257700]: 2025-11-24 09:57:16.939 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:16 compute-0 nova_compute[257700]: 2025-11-24 09:57:16.939 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:16 compute-0 nova_compute[257700]: 2025-11-24 09:57:16.939 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:16 compute-0 nova_compute[257700]: 2025-11-24 09:57:16.940 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 09:57:16 compute-0 nova_compute[257700]: 2025-11-24 09:57:16.967 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 09:57:16 compute-0 nova_compute[257700]: 2025-11-24 09:57:16.968 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:16.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:16 compute-0 nova_compute[257700]: 2025-11-24 09:57:16.991 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:17.520Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:57:17 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:17.694 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:57:17 compute-0 nova_compute[257700]: 2025-11-24 09:57:17.695 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:17 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:17.696 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:57:17 compute-0 nova_compute[257700]: 2025-11-24 09:57:17.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:17 compute-0 nova_compute[257700]: 2025-11-24 09:57:17.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:57:17 compute-0 nova_compute[257700]: 2025-11-24 09:57:17.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:57:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:18.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:18 compute-0 ceph-mon[74331]: pgmap v913: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 16 KiB/s wr, 96 op/s
Nov 24 09:57:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1778182928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:18 compute-0 nova_compute[257700]: 2025-11-24 09:57:18.325 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:57:18 compute-0 nova_compute[257700]: 2025-11-24 09:57:18.325 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquired lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:57:18 compute-0 nova_compute[257700]: 2025-11-24 09:57:18.325 257704 DEBUG nova.network.neutron [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 09:57:18 compute-0 nova_compute[257700]: 2025-11-24 09:57:18.326 257704 DEBUG nova.objects.instance [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b014f69e-04e5-4c5d-bb6c-e88b4410e6ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:57:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:18.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:57:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 16 KiB/s wr, 96 op/s
Nov 24 09:57:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:18.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1551701770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:19 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:19.698 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:57:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:57:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:57:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:57:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:57:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:20.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:20 compute-0 ceph-mon[74331]: pgmap v914: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 16 KiB/s wr, 96 op/s
Nov 24 09:57:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/65500581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2192205247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:20.568 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:20.568 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:20.569 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:20 compute-0 nova_compute[257700]: 2025-11-24 09:57:20.839 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:20 compute-0 nova_compute[257700]: 2025-11-24 09:57:20.900 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 89 op/s
Nov 24 09:57:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:20.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:57:20] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Nov 24 09:57:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:57:20] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.421 257704 DEBUG nova.network.neutron [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Updating instance_info_cache with network_info: [{"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.436 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Releasing lock "refresh_cache-b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.436 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.436 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.436 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.456 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.456 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.457 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.457 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.457 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:57:21 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1259732043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.883 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.952 257704 DEBUG nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 09:57:21 compute-0 nova_compute[257700]: 2025-11-24 09:57:21.953 257704 DEBUG nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.098 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.099 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4346MB free_disk=59.92180252075195GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.100 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.100 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:22.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.220 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Instance b014f69e-04e5-4c5d-bb6c-e88b4410e6ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.221 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.221 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:57:22 compute-0 ceph-mon[74331]: pgmap v915: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 89 op/s
Nov 24 09:57:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1259732043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.359 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:57:22 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264970087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.850 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.855 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.869 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:57:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.935 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.936 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.936 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.937 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 09:57:22 compute-0 nova_compute[257700]: 2025-11-24 09:57:22.957 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:22.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:23 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3264970087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:23 compute-0 nova_compute[257700]: 2025-11-24 09:57:23.960 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:23 compute-0 nova_compute[257700]: 2025-11-24 09:57:23.961 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:23 compute-0 nova_compute[257700]: 2025-11-24 09:57:23.976 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:57:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:24.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:24 compute-0 ceph-mon[74331]: pgmap v916: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Nov 24 09:57:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Nov 24 09:57:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:24.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:57:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:57:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:57:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:57:25 compute-0 nova_compute[257700]: 2025-11-24 09:57:25.866 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:25 compute-0 nova_compute[257700]: 2025-11-24 09:57:25.902 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:26.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:26 compute-0 ceph-mon[74331]: pgmap v917: 353 pgs: 353 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Nov 24 09:57:26 compute-0 sudo[271988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:57:26 compute-0 sudo[271988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:26 compute-0 sudo[271988]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 24 09:57:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:57:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:26.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:57:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:27.521Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:57:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:57:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:28.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:57:28 compute-0 ceph-mon[74331]: pgmap v918: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Nov 24 09:57:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:28.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:57:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 09:57:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:28.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:57:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:57:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:57:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:57:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:30.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:30 compute-0 ceph-mon[74331]: pgmap v919: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 09:57:30 compute-0 nova_compute[257700]: 2025-11-24 09:57:30.869 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:30 compute-0 nova_compute[257700]: 2025-11-24 09:57:30.904 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 09:57:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:30.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:57:30] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:57:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:57:30] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:57:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:57:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:32.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:32 compute-0 ceph-mon[74331]: pgmap v920: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 09:57:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 09:57:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:32.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:34.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:34 compute-0 ceph-mon[74331]: pgmap v921: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 09:57:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 24 09:57:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:34.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:57:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:57:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:57:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:57:35 compute-0 podman[272023]: 2025-11-24 09:57:35.793872023 +0000 UTC m=+0.065303938 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 24 09:57:35 compute-0 podman[272024]: 2025-11-24 09:57:35.819037565 +0000 UTC m=+0.090426288 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 09:57:35 compute-0 nova_compute[257700]: 2025-11-24 09:57:35.869 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:35 compute-0 nova_compute[257700]: 2025-11-24 09:57:35.906 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:36.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:36 compute-0 ceph-mon[74331]: pgmap v922: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 24 09:57:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3443165567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 24 09:57:36 compute-0 nova_compute[257700]: 2025-11-24 09:57:36.960 257704 DEBUG oslo_concurrency.lockutils [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:36 compute-0 nova_compute[257700]: 2025-11-24 09:57:36.960 257704 DEBUG oslo_concurrency.lockutils [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:36 compute-0 nova_compute[257700]: 2025-11-24 09:57:36.960 257704 DEBUG oslo_concurrency.lockutils [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:36 compute-0 nova_compute[257700]: 2025-11-24 09:57:36.960 257704 DEBUG oslo_concurrency.lockutils [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:36 compute-0 nova_compute[257700]: 2025-11-24 09:57:36.961 257704 DEBUG oslo_concurrency.lockutils [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:36 compute-0 nova_compute[257700]: 2025-11-24 09:57:36.962 257704 INFO nova.compute.manager [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Terminating instance
Nov 24 09:57:36 compute-0 nova_compute[257700]: 2025-11-24 09:57:36.962 257704 DEBUG nova.compute.manager [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 09:57:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:36.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:37 compute-0 kernel: tapc6d0f148-a8 (unregistering): left promiscuous mode
Nov 24 09:57:37 compute-0 NetworkManager[48883]: <info>  [1763978257.0149] device (tapc6d0f148-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 09:57:37 compute-0 ovn_controller[155123]: 2025-11-24T09:57:37Z|00046|binding|INFO|Releasing lport c6d0f148-a8f4-467e-8be3-1a120663dc95 from this chassis (sb_readonly=0)
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.023 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:37 compute-0 ovn_controller[155123]: 2025-11-24T09:57:37Z|00047|binding|INFO|Setting lport c6d0f148-a8f4-467e-8be3-1a120663dc95 down in Southbound
Nov 24 09:57:37 compute-0 ovn_controller[155123]: 2025-11-24T09:57:37Z|00048|binding|INFO|Removing iface tapc6d0f148-a8 ovn-installed in OVS
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.025 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.030 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:13:8b 10.100.0.6'], port_security=['fa:16:3e:88:13:8b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b014f69e-04e5-4c5d-bb6c-e88b4410e6ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a54e00b-2ddf-4829-be22-9a556b586781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fae8e741-c53a-4962-8907-2f1b9659e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cefc33a4-ddb4-430f-bd3b-965ffc7d2eca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=c6d0f148-a8f4-467e-8be3-1a120663dc95) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.031 165073 INFO neutron.agent.ovn.metadata.agent [-] Port c6d0f148-a8f4-467e-8be3-1a120663dc95 in datapath 4a54e00b-2ddf-4829-be22-9a556b586781 unbound from our chassis
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.032 165073 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a54e00b-2ddf-4829-be22-9a556b586781, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.033 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c12425-fd07-4cc9-a721-58cafce21e72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.033 165073 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 namespace which is not needed anymore
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.047 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:37 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 24 09:57:37 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Consumed 15.736s CPU time.
Nov 24 09:57:37 compute-0 systemd-machined[219130]: Machine qemu-2-instance-00000004 terminated.
Nov 24 09:57:37 compute-0 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[269295]: [NOTICE]   (269299) : haproxy version is 2.8.14-c23fe91
Nov 24 09:57:37 compute-0 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[269295]: [NOTICE]   (269299) : path to executable is /usr/sbin/haproxy
Nov 24 09:57:37 compute-0 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[269295]: [WARNING]  (269299) : Exiting Master process...
Nov 24 09:57:37 compute-0 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[269295]: [WARNING]  (269299) : Exiting Master process...
Nov 24 09:57:37 compute-0 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[269295]: [ALERT]    (269299) : Current worker (269301) exited with code 143 (Terminated)
Nov 24 09:57:37 compute-0 neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781[269295]: [WARNING]  (269299) : All workers exited. Exiting... (0)
Nov 24 09:57:37 compute-0 systemd[1]: libpod-4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509.scope: Deactivated successfully.
Nov 24 09:57:37 compute-0 podman[272093]: 2025-11-24 09:57:37.166255246 +0000 UTC m=+0.042043302 container died 4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509-userdata-shm.mount: Deactivated successfully.
Nov 24 09:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa8138a8c70e7d0457e244c836822f524085b4e78fa99c62495f104a05341cf4-merged.mount: Deactivated successfully.
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.201 257704 INFO nova.virt.libvirt.driver [-] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Instance destroyed successfully.
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.201 257704 DEBUG nova.objects.instance [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid b014f69e-04e5-4c5d-bb6c-e88b4410e6ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:57:37 compute-0 podman[272093]: 2025-11-24 09:57:37.206041393 +0000 UTC m=+0.081829449 container cleanup 4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.211 257704 DEBUG nova.virt.libvirt.vif [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:56:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1166261396',display_name='tempest-TestNetworkBasicOps-server-1166261396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1166261396',id=4,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOlcnO+6RLSVuTHVahBA4wvrglLYJUElJsxDY5VFlil+CW8gtXSqW2DbLbNmozC6q2P0tVa4tVNMNBCnQGiQcIUf/IVfWp1zQZ3KhjDNVA3XKoC7hMjVrSu87SHoeev9EQ==',key_name='tempest-TestNetworkBasicOps-1920594917',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:56:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-57ahusuc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:56:33Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=b014f69e-04e5-4c5d-bb6c-e88b4410e6ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.211 257704 DEBUG nova.network.os_vif_util [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "address": "fa:16:3e:88:13:8b", "network": {"id": "4a54e00b-2ddf-4829-be22-9a556b586781", "bridge": "br-int", "label": "tempest-network-smoke--280510625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6d0f148-a8", "ovs_interfaceid": "c6d0f148-a8f4-467e-8be3-1a120663dc95", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.212 257704 DEBUG nova.network.os_vif_util [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:88:13:8b,bridge_name='br-int',has_traffic_filtering=True,id=c6d0f148-a8f4-467e-8be3-1a120663dc95,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6d0f148-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.212 257704 DEBUG os_vif [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:13:8b,bridge_name='br-int',has_traffic_filtering=True,id=c6d0f148-a8f4-467e-8be3-1a120663dc95,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6d0f148-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 09:57:37 compute-0 systemd[1]: libpod-conmon-4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509.scope: Deactivated successfully.
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.214 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.214 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6d0f148-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.215 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.216 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.218 257704 INFO os_vif [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:88:13:8b,bridge_name='br-int',has_traffic_filtering=True,id=c6d0f148-a8f4-467e-8be3-1a120663dc95,network=Network(4a54e00b-2ddf-4829-be22-9a556b586781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6d0f148-a8')
Nov 24 09:57:37 compute-0 podman[272135]: 2025-11-24 09:57:37.287285436 +0000 UTC m=+0.061034853 container remove 4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.292 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd5f41e-a7c2-417b-bc54-792ad7ec8cc4]: (4, ('Mon Nov 24 09:57:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 (4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509)\n4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509\nMon Nov 24 09:57:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 (4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509)\n4b6e281a6fc71ea21b7ef329aa51ba71efd6dd920f7003f7cb9ba194f515d509\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.294 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[df90c2d5-50a8-490d-abf3-2293907f6a1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.295 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a54e00b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.296 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:37 compute-0 kernel: tap4a54e00b-20: left promiscuous mode
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.298 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.301 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[d728f2d2-0c27-4e37-9d0c-9b716a96b225]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:37 compute-0 sshd-session[272067]: Invalid user tomcat from 36.255.3.203 port 43862
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.311 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.315 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[bd57a379-3f26-4798-b664-9e737bdc9656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.316 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[18f88a30-43f7-4109-85ba-7e80ba8b998c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.330 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[65cd0e09-7e8c-4422-91b2-b16ec5ca18df]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 413955, 'reachable_time': 36130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272169, 'error': None, 'target': 'ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.333 165227 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a54e00b-2ddf-4829-be22-9a556b586781 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 09:57:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:57:37.333 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[ac36025a-dc4d-4f34-85cf-29353362d997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:57:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d4a54e00b\x2d2ddf\x2d4829\x2dbe22\x2d9a556b586781.mount: Deactivated successfully.
Nov 24 09:57:37 compute-0 sshd-session[272067]: Received disconnect from 36.255.3.203 port 43862:11: Bye Bye [preauth]
Nov 24 09:57:37 compute-0 sshd-session[272067]: Disconnected from invalid user tomcat 36.255.3.203 port 43862 [preauth]
Nov 24 09:57:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:37.522Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:57:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:37.522Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:57:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:37.522Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.608 257704 INFO nova.virt.libvirt.driver [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Deleting instance files /var/lib/nova/instances/b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_del
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.609 257704 INFO nova.virt.libvirt.driver [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Deletion of /var/lib/nova/instances/b014f69e-04e5-4c5d-bb6c-e88b4410e6ab_del complete
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.658 257704 INFO nova.compute.manager [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Took 0.70 seconds to destroy the instance on the hypervisor.
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.659 257704 DEBUG oslo.service.loopingcall [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.659 257704 DEBUG nova.compute.manager [-] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.659 257704 DEBUG nova.network.neutron [-] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.719 257704 DEBUG nova.compute.manager [req-2b8cacc9-b6a0-4276-b9cc-ae6be7682cb5 req-a7b02627-b03e-4feb-887b-8c73a3e98fde 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Received event network-vif-unplugged-c6d0f148-a8f4-467e-8be3-1a120663dc95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.720 257704 DEBUG oslo_concurrency.lockutils [req-2b8cacc9-b6a0-4276-b9cc-ae6be7682cb5 req-a7b02627-b03e-4feb-887b-8c73a3e98fde 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.720 257704 DEBUG oslo_concurrency.lockutils [req-2b8cacc9-b6a0-4276-b9cc-ae6be7682cb5 req-a7b02627-b03e-4feb-887b-8c73a3e98fde 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.720 257704 DEBUG oslo_concurrency.lockutils [req-2b8cacc9-b6a0-4276-b9cc-ae6be7682cb5 req-a7b02627-b03e-4feb-887b-8c73a3e98fde 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.720 257704 DEBUG nova.compute.manager [req-2b8cacc9-b6a0-4276-b9cc-ae6be7682cb5 req-a7b02627-b03e-4feb-887b-8c73a3e98fde 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] No waiting events found dispatching network-vif-unplugged-c6d0f148-a8f4-467e-8be3-1a120663dc95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.721 257704 DEBUG nova.compute.manager [req-2b8cacc9-b6a0-4276-b9cc-ae6be7682cb5 req-a7b02627-b03e-4feb-887b-8c73a3e98fde 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Received event network-vif-unplugged-c6d0f148-a8f4-467e-8be3-1a120663dc95 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.721 257704 DEBUG nova.compute.manager [req-2b8cacc9-b6a0-4276-b9cc-ae6be7682cb5 req-a7b02627-b03e-4feb-887b-8c73a3e98fde 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Received event network-vif-plugged-c6d0f148-a8f4-467e-8be3-1a120663dc95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.721 257704 DEBUG oslo_concurrency.lockutils [req-2b8cacc9-b6a0-4276-b9cc-ae6be7682cb5 req-a7b02627-b03e-4feb-887b-8c73a3e98fde 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.721 257704 DEBUG oslo_concurrency.lockutils [req-2b8cacc9-b6a0-4276-b9cc-ae6be7682cb5 req-a7b02627-b03e-4feb-887b-8c73a3e98fde 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.721 257704 DEBUG oslo_concurrency.lockutils [req-2b8cacc9-b6a0-4276-b9cc-ae6be7682cb5 req-a7b02627-b03e-4feb-887b-8c73a3e98fde 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.722 257704 DEBUG nova.compute.manager [req-2b8cacc9-b6a0-4276-b9cc-ae6be7682cb5 req-a7b02627-b03e-4feb-887b-8c73a3e98fde 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] No waiting events found dispatching network-vif-plugged-c6d0f148-a8f4-467e-8be3-1a120663dc95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:57:37 compute-0 nova_compute[257700]: 2025-11-24 09:57:37.722 257704 WARNING nova.compute.manager [req-2b8cacc9-b6a0-4276-b9cc-ae6be7682cb5 req-a7b02627-b03e-4feb-887b-8c73a3e98fde 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Received unexpected event network-vif-plugged-c6d0f148-a8f4-467e-8be3-1a120663dc95 for instance with vm_state active and task_state deleting.
Nov 24 09:57:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:38.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:38 compute-0 nova_compute[257700]: 2025-11-24 09:57:38.300 257704 DEBUG nova.network.neutron [-] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:57:38 compute-0 nova_compute[257700]: 2025-11-24 09:57:38.317 257704 INFO nova.compute.manager [-] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Took 0.66 seconds to deallocate network for instance.
Nov 24 09:57:38 compute-0 nova_compute[257700]: 2025-11-24 09:57:38.360 257704 DEBUG nova.compute.manager [req-c64605ed-1d8b-4826-811a-62a76a21e780 req-c27ea918-1352-41bd-b4e1-8307f5a41ec2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Received event network-vif-deleted-c6d0f148-a8f4-467e-8be3-1a120663dc95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:57:38 compute-0 nova_compute[257700]: 2025-11-24 09:57:38.362 257704 DEBUG oslo_concurrency.lockutils [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:57:38 compute-0 nova_compute[257700]: 2025-11-24 09:57:38.363 257704 DEBUG oslo_concurrency.lockutils [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:57:38 compute-0 nova_compute[257700]: 2025-11-24 09:57:38.417 257704 DEBUG oslo_concurrency.processutils [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:57:38 compute-0 ceph-mon[74331]: pgmap v923: 353 pgs: 353 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 24 09:57:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:57:38 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2735909461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:38 compute-0 nova_compute[257700]: 2025-11-24 09:57:38.842 257704 DEBUG oslo_concurrency.processutils [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:57:38 compute-0 nova_compute[257700]: 2025-11-24 09:57:38.848 257704 DEBUG nova.compute.provider_tree [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:57:38 compute-0 nova_compute[257700]: 2025-11-24 09:57:38.859 257704 DEBUG nova.scheduler.client.report [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:57:38 compute-0 nova_compute[257700]: 2025-11-24 09:57:38.874 257704 DEBUG oslo_concurrency.lockutils [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:38.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:57:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:38.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:57:38 compute-0 nova_compute[257700]: 2025-11-24 09:57:38.903 257704 INFO nova.scheduler.client.report [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance b014f69e-04e5-4c5d-bb6c-e88b4410e6ab
Nov 24 09:57:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Nov 24 09:57:38 compute-0 nova_compute[257700]: 2025-11-24 09:57:38.958 257704 DEBUG oslo_concurrency.lockutils [None req-eccf3b7a-f43e-4999-8128-f4f87759167c 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "b014f69e-04e5-4c5d-bb6c-e88b4410e6ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:57:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:57:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:38.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:57:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2735909461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:57:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:57:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:57:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:57:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:40.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:40 compute-0 ceph-mon[74331]: pgmap v924: 353 pgs: 353 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Nov 24 09:57:40 compute-0 sshd-session[272016]: error: kex_exchange_identification: read: Connection timed out
Nov 24 09:57:40 compute-0 sshd-session[272016]: banner exchange: Connection from 14.215.126.91 port 34636: Connection timed out
Nov 24 09:57:40 compute-0 nova_compute[257700]: 2025-11-24 09:57:40.872 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Nov 24 09:57:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:57:40] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:57:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:57:40] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:57:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:40.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:41 compute-0 ceph-mon[74331]: pgmap v925: 353 pgs: 353 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Nov 24 09:57:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:42.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:42 compute-0 nova_compute[257700]: 2025-11-24 09:57:42.217 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 57 op/s
Nov 24 09:57:42 compute-0 nova_compute[257700]: 2025-11-24 09:57:42.959 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:42.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:43 compute-0 nova_compute[257700]: 2025-11-24 09:57:43.081 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:43 compute-0 podman[272202]: 2025-11-24 09:57:43.776836025 +0000 UTC m=+0.052067966 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:57:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:57:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:44.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:57:44 compute-0 ceph-mon[74331]: pgmap v926: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 57 op/s
Nov 24 09:57:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 56 op/s
Nov 24 09:57:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:57:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:57:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:57:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:57:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:57:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:45.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:57:45
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.log', 'images', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', '.nfs', 'backups', 'volumes', 'default.rgw.control']
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:57:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:57:45 compute-0 nova_compute[257700]: 2025-11-24 09:57:45.875 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:46.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:46 compute-0 ceph-mon[74331]: pgmap v927: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 56 op/s
Nov 24 09:57:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:57:46 compute-0 sudo[272224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:57:46 compute-0 sudo[272224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:57:46 compute-0 sudo[272224]: pam_unix(sudo:session): session closed for user root
Nov 24 09:57:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 57 op/s
Nov 24 09:57:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:47.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:47 compute-0 nova_compute[257700]: 2025-11-24 09:57:47.219 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:47.523Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:57:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:47.523Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:57:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:47.523Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:57:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:48.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:48 compute-0 ceph-mon[74331]: pgmap v928: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 57 op/s
Nov 24 09:57:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:48.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:57:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 09:57:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:49.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:57:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:57:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:57:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:57:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:50.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:50 compute-0 ceph-mon[74331]: pgmap v929: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 09:57:50 compute-0 nova_compute[257700]: 2025-11-24 09:57:50.904 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 09:57:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:57:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 09:57:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:57:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 09:57:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:51.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:52.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:52 compute-0 nova_compute[257700]: 2025-11-24 09:57:52.197 257704 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978257.1952386, b014f69e-04e5-4c5d-bb6c-e88b4410e6ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:57:52 compute-0 nova_compute[257700]: 2025-11-24 09:57:52.197 257704 INFO nova.compute.manager [-] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] VM Stopped (Lifecycle Event)
Nov 24 09:57:52 compute-0 nova_compute[257700]: 2025-11-24 09:57:52.213 257704 DEBUG nova.compute.manager [None req-772fb0b7-594d-406b-b6a9-2e9aad59a5b3 - - - - - -] [instance: b014f69e-04e5-4c5d-bb6c-e88b4410e6ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:57:52 compute-0 nova_compute[257700]: 2025-11-24 09:57:52.222 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:52 compute-0 ceph-mon[74331]: pgmap v930: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 09:57:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Nov 24 09:57:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:53.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:57:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:57:54 compute-0 ceph-mon[74331]: pgmap v931: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Nov 24 09:57:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s
Nov 24 09:57:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:57:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:57:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:57:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:57:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:55.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:55 compute-0 nova_compute[257700]: 2025-11-24 09:57:55.906 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:57:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:56.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:57:56 compute-0 ceph-mon[74331]: pgmap v932: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s
Nov 24 09:57:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Nov 24 09:57:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:57.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:57 compute-0 nova_compute[257700]: 2025-11-24 09:57:57.225 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:57:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:57.524Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:57:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:57:58.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:58 compute-0 ceph-mon[74331]: pgmap v933: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Nov 24 09:57:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4049579656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:57:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:57:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:58.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:57:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:58.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:57:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:57:58.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:57:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 170 op/s
Nov 24 09:57:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:57:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:57:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:57:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:57:59 compute-0 ceph-mon[74331]: pgmap v934: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 170 op/s
Nov 24 09:58:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:58:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:58:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:57:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:58:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:58:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:00.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:58:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 170 op/s
Nov 24 09:58:00 compute-0 nova_compute[257700]: 2025-11-24 09:58:00.944 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:58:00] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 09:58:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:58:00] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 09:58:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:01 compute-0 ceph-mon[74331]: pgmap v935: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 170 op/s
Nov 24 09:58:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:02.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:02 compute-0 nova_compute[257700]: 2025-11-24 09:58:02.227 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1301144539' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:58:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1301144539' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:58:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 1.8 MiB/s wr, 198 op/s
Nov 24 09:58:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:03 compute-0 ceph-mon[74331]: pgmap v936: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 1.8 MiB/s wr, 198 op/s
Nov 24 09:58:03 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4014279327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:58:03 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/625293257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:58:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 1.8 MiB/s wr, 197 op/s
Nov 24 09:58:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:58:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:58:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:58:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:58:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:05 compute-0 nova_compute[257700]: 2025-11-24 09:58:05.946 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:05 compute-0 ceph-mon[74331]: pgmap v937: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 1.8 MiB/s wr, 197 op/s
Nov 24 09:58:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:58:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:06.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:58:06 compute-0 podman[272271]: 2025-11-24 09:58:06.810315964 +0000 UTC m=+0.077334110 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:58:06 compute-0 podman[272272]: 2025-11-24 09:58:06.825119704 +0000 UTC m=+0.095238185 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 09:58:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1014 KiB/s rd, 1.8 MiB/s wr, 237 op/s
Nov 24 09:58:06 compute-0 sudo[272316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:58:06 compute-0 sudo[272316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:06 compute-0 sudo[272316]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:07.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:07 compute-0 nova_compute[257700]: 2025-11-24 09:58:07.229 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:07.525Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:58:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:07.526Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:58:08 compute-0 ceph-mon[74331]: pgmap v938: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1014 KiB/s rd, 1.8 MiB/s wr, 237 op/s
Nov 24 09:58:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:08.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:08 compute-0 sshd-session[272341]: Received disconnect from 83.229.122.23 port 55730:11: Bye Bye [preauth]
Nov 24 09:58:08 compute-0 sshd-session[272341]: Disconnected from authenticating user root 83.229.122.23 port 55730 [preauth]
Nov 24 09:58:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:08.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:58:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 912 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 24 09:58:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:09.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:58:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:58:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:58:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:58:10 compute-0 ceph-mon[74331]: pgmap v939: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 912 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 24 09:58:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:10.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 912 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 24 09:58:10 compute-0 nova_compute[257700]: 2025-11-24 09:58:10.948 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:58:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 09:58:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:58:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 09:58:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:12 compute-0 ceph-mon[74331]: pgmap v940: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 912 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 24 09:58:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:12.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:12 compute-0 nova_compute[257700]: 2025-11-24 09:58:12.232 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:58:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:13.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:14 compute-0 ceph-mon[74331]: pgmap v941: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 09:58:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:14.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:14 compute-0 podman[272352]: 2025-11-24 09:58:14.773093915 +0000 UTC m=+0.051765689 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:58:14 compute-0 nova_compute[257700]: 2025-11-24 09:58:14.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 09:58:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:58:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:58:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:58:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:58:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:58:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:58:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:58:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:58:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:58:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:58:15 compute-0 sudo[272373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:58:15 compute-0 sudo[272373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:15 compute-0 sudo[272373]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:15 compute-0 sudo[272398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:58:15 compute-0 sudo[272398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:15 compute-0 nova_compute[257700]: 2025-11-24 09:58:15.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:15 compute-0 nova_compute[257700]: 2025-11-24 09:58:15.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:15 compute-0 nova_compute[257700]: 2025-11-24 09:58:15.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 09:58:15 compute-0 nova_compute[257700]: 2025-11-24 09:58:15.950 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:16 compute-0 ceph-mon[74331]: pgmap v942: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 09:58:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:58:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:16.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 09:58:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 09:58:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:58:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:58:16 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:16 compute-0 sudo[272398]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 24 09:58:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:17.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:17 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:17 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:17 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:17 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:17 compute-0 nova_compute[257700]: 2025-11-24 09:58:17.235 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:17.526Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:58:17 compute-0 nova_compute[257700]: 2025-11-24 09:58:17.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:58:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:18.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:58:18 compute-0 ovn_controller[155123]: 2025-11-24T09:58:18Z|00049|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 24 09:58:18 compute-0 ceph-mon[74331]: pgmap v943: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 24 09:58:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:18.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:58:18 compute-0 sshd-session[272345]: error: kex_exchange_identification: read: Connection timed out
Nov 24 09:58:18 compute-0 sshd-session[272345]: banner exchange: Connection from 14.215.126.91 port 33994: Connection timed out
Nov 24 09:58:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 35 op/s
Nov 24 09:58:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:19.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 09:58:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 09:58:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 41 op/s
Nov 24 09:58:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:58:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:58:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:19 compute-0 sudo[272457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:58:19 compute-0 sudo[272457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:19 compute-0 sudo[272457]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:19 compute-0 sudo[272482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:58:19 compute-0 sudo[272482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:19 compute-0 podman[272549]: 2025-11-24 09:58:19.659347241 +0000 UTC m=+0.040093356 container create 6792918156c429987d0cdedb0077e066bf905ca7d1f9bc3c3f78417e4dddc53b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 24 09:58:19 compute-0 systemd[1]: Started libpod-conmon-6792918156c429987d0cdedb0077e066bf905ca7d1f9bc3c3f78417e4dddc53b.scope.
Nov 24 09:58:19 compute-0 podman[272549]: 2025-11-24 09:58:19.641326383 +0000 UTC m=+0.022072518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:58:19 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:58:19 compute-0 podman[272549]: 2025-11-24 09:58:19.755592169 +0000 UTC m=+0.136338304 container init 6792918156c429987d0cdedb0077e066bf905ca7d1f9bc3c3f78417e4dddc53b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 09:58:19 compute-0 podman[272549]: 2025-11-24 09:58:19.764001524 +0000 UTC m=+0.144747639 container start 6792918156c429987d0cdedb0077e066bf905ca7d1f9bc3c3f78417e4dddc53b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_pike, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 24 09:58:19 compute-0 podman[272549]: 2025-11-24 09:58:19.767259642 +0000 UTC m=+0.148005777 container attach 6792918156c429987d0cdedb0077e066bf905ca7d1f9bc3c3f78417e4dddc53b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_pike, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:58:19 compute-0 zealous_pike[272566]: 167 167
Nov 24 09:58:19 compute-0 systemd[1]: libpod-6792918156c429987d0cdedb0077e066bf905ca7d1f9bc3c3f78417e4dddc53b.scope: Deactivated successfully.
Nov 24 09:58:19 compute-0 podman[272549]: 2025-11-24 09:58:19.772868729 +0000 UTC m=+0.153614844 container died 6792918156c429987d0cdedb0077e066bf905ca7d1f9bc3c3f78417e4dddc53b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_pike, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:58:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7202e13620cd3e86a4bb6f6ffe1df164702a9e28a819cc1b54c7dce657121634-merged.mount: Deactivated successfully.
Nov 24 09:58:19 compute-0 podman[272549]: 2025-11-24 09:58:19.813423914 +0000 UTC m=+0.194170029 container remove 6792918156c429987d0cdedb0077e066bf905ca7d1f9bc3c3f78417e4dddc53b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 09:58:19 compute-0 systemd[1]: libpod-conmon-6792918156c429987d0cdedb0077e066bf905ca7d1f9bc3c3f78417e4dddc53b.scope: Deactivated successfully.
Nov 24 09:58:19 compute-0 nova_compute[257700]: 2025-11-24 09:58:19.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:19 compute-0 nova_compute[257700]: 2025-11-24 09:58:19.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:58:19 compute-0 nova_compute[257700]: 2025-11-24 09:58:19.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:58:19 compute-0 nova_compute[257700]: 2025-11-24 09:58:19.935 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 09:58:19 compute-0 nova_compute[257700]: 2025-11-24 09:58:19.935 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:19 compute-0 nova_compute[257700]: 2025-11-24 09:58:19.949 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:19 compute-0 nova_compute[257700]: 2025-11-24 09:58:19.949 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:19 compute-0 nova_compute[257700]: 2025-11-24 09:58:19.949 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:19 compute-0 nova_compute[257700]: 2025-11-24 09:58:19.949 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:58:19 compute-0 nova_compute[257700]: 2025-11-24 09:58:19.950 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:19 compute-0 podman[272590]: 2025-11-24 09:58:19.971508535 +0000 UTC m=+0.039672795 container create e81b083b4d870a41c72a557557961609a15355522c28a5288ce7d695dc431767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:58:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:58:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:58:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:58:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:58:20 compute-0 systemd[1]: Started libpod-conmon-e81b083b4d870a41c72a557557961609a15355522c28a5288ce7d695dc431767.scope.
Nov 24 09:58:20 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035134e4271f179dea3a7cda42beb5a3aa131cb74aea394cd8ebe4a11739e055/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035134e4271f179dea3a7cda42beb5a3aa131cb74aea394cd8ebe4a11739e055/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035134e4271f179dea3a7cda42beb5a3aa131cb74aea394cd8ebe4a11739e055/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035134e4271f179dea3a7cda42beb5a3aa131cb74aea394cd8ebe4a11739e055/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035134e4271f179dea3a7cda42beb5a3aa131cb74aea394cd8ebe4a11739e055/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:20 compute-0 podman[272590]: 2025-11-24 09:58:19.955478005 +0000 UTC m=+0.023642295 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:58:20 compute-0 podman[272590]: 2025-11-24 09:58:20.050852553 +0000 UTC m=+0.119016813 container init e81b083b4d870a41c72a557557961609a15355522c28a5288ce7d695dc431767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 09:58:20 compute-0 podman[272590]: 2025-11-24 09:58:20.059932683 +0000 UTC m=+0.128096943 container start e81b083b4d870a41c72a557557961609a15355522c28a5288ce7d695dc431767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 24 09:58:20 compute-0 podman[272590]: 2025-11-24 09:58:20.063560422 +0000 UTC m=+0.131724702 container attach e81b083b4d870a41c72a557557961609a15355522c28a5288ce7d695dc431767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bardeen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:58:20 compute-0 ceph-mon[74331]: pgmap v944: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 35 op/s
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/829144257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:58:20 compute-0 ceph-mon[74331]: pgmap v945: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 41 op/s
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2462380472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1162875952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3874872579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:20.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:20 compute-0 practical_bardeen[272607]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:58:20 compute-0 practical_bardeen[272607]: --> All data devices are unavailable
Nov 24 09:58:20 compute-0 systemd[1]: libpod-e81b083b4d870a41c72a557557961609a15355522c28a5288ce7d695dc431767.scope: Deactivated successfully.
Nov 24 09:58:20 compute-0 conmon[272607]: conmon e81b083b4d870a41c72a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e81b083b4d870a41c72a557557961609a15355522c28a5288ce7d695dc431767.scope/container/memory.events
Nov 24 09:58:20 compute-0 podman[272590]: 2025-11-24 09:58:20.391312265 +0000 UTC m=+0.459476515 container died e81b083b4d870a41c72a557557961609a15355522c28a5288ce7d695dc431767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bardeen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:58:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:58:20 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425649229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-035134e4271f179dea3a7cda42beb5a3aa131cb74aea394cd8ebe4a11739e055-merged.mount: Deactivated successfully.
Nov 24 09:58:20 compute-0 nova_compute[257700]: 2025-11-24 09:58:20.413 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:58:20 compute-0 podman[272590]: 2025-11-24 09:58:20.430596329 +0000 UTC m=+0.498760589 container remove e81b083b4d870a41c72a557557961609a15355522c28a5288ce7d695dc431767 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 09:58:20 compute-0 systemd[1]: libpod-conmon-e81b083b4d870a41c72a557557961609a15355522c28a5288ce7d695dc431767.scope: Deactivated successfully.
Nov 24 09:58:20 compute-0 sudo[272482]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:20 compute-0 sudo[272658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:58:20 compute-0 sudo[272658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:20 compute-0 sudo[272658]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:20.569 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:20.570 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:20.570 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:20 compute-0 nova_compute[257700]: 2025-11-24 09:58:20.586 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:58:20 compute-0 nova_compute[257700]: 2025-11-24 09:58:20.587 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4545MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 09:58:20 compute-0 nova_compute[257700]: 2025-11-24 09:58:20.587 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:20 compute-0 nova_compute[257700]: 2025-11-24 09:58:20.587 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:20 compute-0 sudo[272683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:58:20 compute-0 sudo[272683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:20 compute-0 nova_compute[257700]: 2025-11-24 09:58:20.658 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:58:20 compute-0 nova_compute[257700]: 2025-11-24 09:58:20.659 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:58:20 compute-0 nova_compute[257700]: 2025-11-24 09:58:20.672 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:20 compute-0 nova_compute[257700]: 2025-11-24 09:58:20.954 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:20 compute-0 podman[272768]: 2025-11-24 09:58:20.987357205 +0000 UTC m=+0.041139310 container create 6540ea94480c3a147acfba154c379d0b3b562cd767fabf97bb560563ff51569b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:58:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:58:20] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Nov 24 09:58:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:58:20] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Nov 24 09:58:21 compute-0 systemd[1]: Started libpod-conmon-6540ea94480c3a147acfba154c379d0b3b562cd767fabf97bb560563ff51569b.scope.
Nov 24 09:58:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:21.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:58:21 compute-0 podman[272768]: 2025-11-24 09:58:21.059438587 +0000 UTC m=+0.113220712 container init 6540ea94480c3a147acfba154c379d0b3b562cd767fabf97bb560563ff51569b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:58:21 compute-0 podman[272768]: 2025-11-24 09:58:21.066615221 +0000 UTC m=+0.120397316 container start 6540ea94480c3a147acfba154c379d0b3b562cd767fabf97bb560563ff51569b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 09:58:21 compute-0 podman[272768]: 2025-11-24 09:58:20.971145552 +0000 UTC m=+0.024927687 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:58:21 compute-0 podman[272768]: 2025-11-24 09:58:21.069743007 +0000 UTC m=+0.123525132 container attach 6540ea94480c3a147acfba154c379d0b3b562cd767fabf97bb560563ff51569b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 24 09:58:21 compute-0 friendly_tesla[272784]: 167 167
Nov 24 09:58:21 compute-0 systemd[1]: libpod-6540ea94480c3a147acfba154c379d0b3b562cd767fabf97bb560563ff51569b.scope: Deactivated successfully.
Nov 24 09:58:21 compute-0 podman[272768]: 2025-11-24 09:58:21.07435318 +0000 UTC m=+0.128135285 container died 6540ea94480c3a147acfba154c379d0b3b562cd767fabf97bb560563ff51569b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_tesla, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 09:58:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:58:21 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246084627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-243957ecbe237bedef16a02f66978b237eb3508f25becf750f094228594bdd26-merged.mount: Deactivated successfully.
Nov 24 09:58:21 compute-0 podman[272768]: 2025-11-24 09:58:21.108381926 +0000 UTC m=+0.162164031 container remove 6540ea94480c3a147acfba154c379d0b3b562cd767fabf97bb560563ff51569b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 24 09:58:21 compute-0 nova_compute[257700]: 2025-11-24 09:58:21.109 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:58:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2425649229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2246084627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:21 compute-0 systemd[1]: libpod-conmon-6540ea94480c3a147acfba154c379d0b3b562cd767fabf97bb560563ff51569b.scope: Deactivated successfully.
Nov 24 09:58:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 41 op/s
Nov 24 09:58:21 compute-0 nova_compute[257700]: 2025-11-24 09:58:21.118 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:58:21 compute-0 nova_compute[257700]: 2025-11-24 09:58:21.136 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:58:21 compute-0 nova_compute[257700]: 2025-11-24 09:58:21.153 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:58:21 compute-0 nova_compute[257700]: 2025-11-24 09:58:21.153 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:21 compute-0 podman[272810]: 2025-11-24 09:58:21.267925313 +0000 UTC m=+0.042878493 container create 50484cc8791e89aa7e68a782e3fd0f1d7f43bdedbe05e200249188f5ef80acdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_goldberg, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 09:58:21 compute-0 systemd[1]: Started libpod-conmon-50484cc8791e89aa7e68a782e3fd0f1d7f43bdedbe05e200249188f5ef80acdc.scope.
Nov 24 09:58:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb00a6e380f60aad95517e7481d12138b4c5cf6b7a203ecb463bcdc094cf278/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb00a6e380f60aad95517e7481d12138b4c5cf6b7a203ecb463bcdc094cf278/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb00a6e380f60aad95517e7481d12138b4c5cf6b7a203ecb463bcdc094cf278/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb00a6e380f60aad95517e7481d12138b4c5cf6b7a203ecb463bcdc094cf278/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:21 compute-0 podman[272810]: 2025-11-24 09:58:21.341142471 +0000 UTC m=+0.116095661 container init 50484cc8791e89aa7e68a782e3fd0f1d7f43bdedbe05e200249188f5ef80acdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:58:21 compute-0 podman[272810]: 2025-11-24 09:58:21.251935724 +0000 UTC m=+0.026888924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:58:21 compute-0 podman[272810]: 2025-11-24 09:58:21.348620543 +0000 UTC m=+0.123573723 container start 50484cc8791e89aa7e68a782e3fd0f1d7f43bdedbe05e200249188f5ef80acdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_goldberg, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:58:21 compute-0 podman[272810]: 2025-11-24 09:58:21.351708038 +0000 UTC m=+0.126661248 container attach 50484cc8791e89aa7e68a782e3fd0f1d7f43bdedbe05e200249188f5ef80acdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_goldberg, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 09:58:21 compute-0 boring_goldberg[272826]: {
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:     "0": [
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:         {
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             "devices": [
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "/dev/loop3"
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             ],
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             "lv_name": "ceph_lv0",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             "lv_size": "21470642176",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             "name": "ceph_lv0",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             "tags": {
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.cluster_name": "ceph",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.crush_device_class": "",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.encrypted": "0",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.osd_id": "0",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.type": "block",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.vdo": "0",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:                 "ceph.with_tpm": "0"
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             },
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             "type": "block",
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:             "vg_name": "ceph_vg0"
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:         }
Nov 24 09:58:21 compute-0 boring_goldberg[272826]:     ]
Nov 24 09:58:21 compute-0 boring_goldberg[272826]: }
Nov 24 09:58:21 compute-0 systemd[1]: libpod-50484cc8791e89aa7e68a782e3fd0f1d7f43bdedbe05e200249188f5ef80acdc.scope: Deactivated successfully.
Nov 24 09:58:21 compute-0 podman[272810]: 2025-11-24 09:58:21.609951242 +0000 UTC m=+0.384904422 container died 50484cc8791e89aa7e68a782e3fd0f1d7f43bdedbe05e200249188f5ef80acdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:58:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eb00a6e380f60aad95517e7481d12138b4c5cf6b7a203ecb463bcdc094cf278-merged.mount: Deactivated successfully.
Nov 24 09:58:21 compute-0 podman[272810]: 2025-11-24 09:58:21.6481664 +0000 UTC m=+0.423119580 container remove 50484cc8791e89aa7e68a782e3fd0f1d7f43bdedbe05e200249188f5ef80acdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_goldberg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 09:58:21 compute-0 systemd[1]: libpod-conmon-50484cc8791e89aa7e68a782e3fd0f1d7f43bdedbe05e200249188f5ef80acdc.scope: Deactivated successfully.
Nov 24 09:58:21 compute-0 sudo[272683]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:21 compute-0 sudo[272847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:58:21 compute-0 sudo[272847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:21 compute-0 sudo[272847]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:21 compute-0 sudo[272872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:58:21 compute-0 sudo[272872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:22 compute-0 ceph-mon[74331]: pgmap v946: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 41 op/s
Nov 24 09:58:22 compute-0 nova_compute[257700]: 2025-11-24 09:58:22.139 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:22 compute-0 nova_compute[257700]: 2025-11-24 09:58:22.140 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:22 compute-0 podman[272938]: 2025-11-24 09:58:22.145684659 +0000 UTC m=+0.037189935 container create c9b0990f7fb093c1573e5b18089b3021cc085b6c2908117ca387c881457376ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_payne, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:58:22 compute-0 systemd[1]: Started libpod-conmon-c9b0990f7fb093c1573e5b18089b3021cc085b6c2908117ca387c881457376ed.scope.
Nov 24 09:58:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:58:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:22.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:22 compute-0 podman[272938]: 2025-11-24 09:58:22.218539129 +0000 UTC m=+0.110044425 container init c9b0990f7fb093c1573e5b18089b3021cc085b6c2908117ca387c881457376ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_payne, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:58:22 compute-0 podman[272938]: 2025-11-24 09:58:22.129781452 +0000 UTC m=+0.021286758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:58:22 compute-0 podman[272938]: 2025-11-24 09:58:22.226052011 +0000 UTC m=+0.117557287 container start c9b0990f7fb093c1573e5b18089b3021cc085b6c2908117ca387c881457376ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Nov 24 09:58:22 compute-0 podman[272938]: 2025-11-24 09:58:22.228811478 +0000 UTC m=+0.120316754 container attach c9b0990f7fb093c1573e5b18089b3021cc085b6c2908117ca387c881457376ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_payne, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:58:22 compute-0 festive_payne[272954]: 167 167
Nov 24 09:58:22 compute-0 systemd[1]: libpod-c9b0990f7fb093c1573e5b18089b3021cc085b6c2908117ca387c881457376ed.scope: Deactivated successfully.
Nov 24 09:58:22 compute-0 podman[272938]: 2025-11-24 09:58:22.232019506 +0000 UTC m=+0.123524782 container died c9b0990f7fb093c1573e5b18089b3021cc085b6c2908117ca387c881457376ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_payne, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:58:22 compute-0 nova_compute[257700]: 2025-11-24 09:58:22.237 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2ca39dbb9399208daa3f0d85ccd015a529a3876e268164b69c5ad8a723a9e20-merged.mount: Deactivated successfully.
Nov 24 09:58:22 compute-0 podman[272938]: 2025-11-24 09:58:22.266868892 +0000 UTC m=+0.158374168 container remove c9b0990f7fb093c1573e5b18089b3021cc085b6c2908117ca387c881457376ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_payne, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:58:22 compute-0 systemd[1]: libpod-conmon-c9b0990f7fb093c1573e5b18089b3021cc085b6c2908117ca387c881457376ed.scope: Deactivated successfully.
Nov 24 09:58:22 compute-0 podman[272981]: 2025-11-24 09:58:22.429548415 +0000 UTC m=+0.040500454 container create da8645f825591cd0c45c3f15a1a9abc54ce3ab21cab25d0f643504dfa91b780a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:58:22 compute-0 systemd[1]: Started libpod-conmon-da8645f825591cd0c45c3f15a1a9abc54ce3ab21cab25d0f643504dfa91b780a.scope.
Nov 24 09:58:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3b919aa92702449598a6e0c604254e23f5e9939809ebfdadbeb3fabb3f4ba9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3b919aa92702449598a6e0c604254e23f5e9939809ebfdadbeb3fabb3f4ba9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3b919aa92702449598a6e0c604254e23f5e9939809ebfdadbeb3fabb3f4ba9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3b919aa92702449598a6e0c604254e23f5e9939809ebfdadbeb3fabb3f4ba9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:22 compute-0 podman[272981]: 2025-11-24 09:58:22.493947809 +0000 UTC m=+0.104899888 container init da8645f825591cd0c45c3f15a1a9abc54ce3ab21cab25d0f643504dfa91b780a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_swirles, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:58:22 compute-0 podman[272981]: 2025-11-24 09:58:22.501015681 +0000 UTC m=+0.111967720 container start da8645f825591cd0c45c3f15a1a9abc54ce3ab21cab25d0f643504dfa91b780a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_swirles, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 09:58:22 compute-0 podman[272981]: 2025-11-24 09:58:22.504408754 +0000 UTC m=+0.115360793 container attach da8645f825591cd0c45c3f15a1a9abc54ce3ab21cab25d0f643504dfa91b780a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 24 09:58:22 compute-0 podman[272981]: 2025-11-24 09:58:22.414367017 +0000 UTC m=+0.025319086 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:58:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:23.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:23 compute-0 lvm[273071]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:58:23 compute-0 lvm[273071]: VG ceph_vg0 finished
Nov 24 09:58:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Nov 24 09:58:23 compute-0 charming_swirles[272997]: {}
Nov 24 09:58:23 compute-0 systemd[1]: libpod-da8645f825591cd0c45c3f15a1a9abc54ce3ab21cab25d0f643504dfa91b780a.scope: Deactivated successfully.
Nov 24 09:58:23 compute-0 systemd[1]: libpod-da8645f825591cd0c45c3f15a1a9abc54ce3ab21cab25d0f643504dfa91b780a.scope: Consumed 1.114s CPU time.
Nov 24 09:58:23 compute-0 podman[272981]: 2025-11-24 09:58:23.201728816 +0000 UTC m=+0.812680885 container died da8645f825591cd0c45c3f15a1a9abc54ce3ab21cab25d0f643504dfa91b780a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_swirles, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Nov 24 09:58:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c3b919aa92702449598a6e0c604254e23f5e9939809ebfdadbeb3fabb3f4ba9-merged.mount: Deactivated successfully.
Nov 24 09:58:23 compute-0 podman[272981]: 2025-11-24 09:58:23.242444855 +0000 UTC m=+0.853396894 container remove da8645f825591cd0c45c3f15a1a9abc54ce3ab21cab25d0f643504dfa91b780a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_swirles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 09:58:23 compute-0 systemd[1]: libpod-conmon-da8645f825591cd0c45c3f15a1a9abc54ce3ab21cab25d0f643504dfa91b780a.scope: Deactivated successfully.
Nov 24 09:58:23 compute-0 sudo[272872]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:58:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:58:23 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:23 compute-0 sudo[273089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:58:23 compute-0 sudo[273089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:23 compute-0 sudo[273089]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:24.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:24 compute-0 ceph-mon[74331]: pgmap v947: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Nov 24 09:58:24 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:24 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:58:24 compute-0 nova_compute[257700]: 2025-11-24 09:58:24.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:58:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:58:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:58:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:58:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:58:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:25.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Nov 24 09:58:25 compute-0 nova_compute[257700]: 2025-11-24 09:58:25.954 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:26.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:26 compute-0 ceph-mon[74331]: pgmap v948: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Nov 24 09:58:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:27.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:27 compute-0 sudo[273119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:58:27 compute-0 sudo[273119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:27 compute-0 sudo[273119]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Nov 24 09:58:27 compute-0 nova_compute[257700]: 2025-11-24 09:58:27.240 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:27.527Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:58:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:27.528Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:58:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:27.528Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
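
Alertmanager keeps timing out against the dashboard receivers on compute-1 and compute-2 (plain http on port 8443, exactly as the logged URLs read). A throwaway stand-in receiver, useful only to confirm that the path and port are reachable from this host; this is an assumption-laden sketch, not the Ceph dashboard API:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Alertmanager POSTs a JSON alert batch to /api/prometheus_receiver.
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            print("alert payload:", body[:200])  # truncated for readability
            self.send_response(200)
            self.end_headers()

    # Bind to the port the webhook in the log above is posting to.
    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
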
Nov 24 09:58:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:28.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:28 compute-0 ceph-mon[74331]: pgmap v949: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Nov 24 09:58:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:28.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:58:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:29.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Nov 24 09:58:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:58:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:58:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:58:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:58:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:30.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:30 compute-0 ceph-mon[74331]: pgmap v950: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Nov 24 09:58:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:58:30] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 24 09:58:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:58:30] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 24 09:58:30 compute-0 nova_compute[257700]: 2025-11-24 09:58:30.995 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:31.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 24 09:58:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:58:31 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 09:58:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:58:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:32.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:58:32 compute-0 nova_compute[257700]: 2025-11-24 09:58:32.243 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:32 compute-0 ceph-mon[74331]: pgmap v951: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 24 09:58:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:33.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 09:58:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:34.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:34 compute-0 ceph-mon[74331]: pgmap v952: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 09:58:34 compute-0 sshd-session[273115]: error: kex_exchange_identification: read: Connection timed out
Nov 24 09:58:34 compute-0 sshd-session[273115]: banner exchange: Connection from 121.31.210.125 port 48506: Connection timed out
Nov 24 09:58:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:58:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:58:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:58:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:58:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:58:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:35.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:58:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 24 09:58:35 compute-0 nova_compute[257700]: 2025-11-24 09:58:35.997 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:36 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:36.051 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:58:36 compute-0 nova_compute[257700]: 2025-11-24 09:58:36.051 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:36 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:36.052 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:58:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:36.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:36 compute-0 ceph-mon[74331]: pgmap v953: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 24 09:58:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:37.053 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 17 KiB/s wr, 2 op/s
Nov 24 09:58:37 compute-0 nova_compute[257700]: 2025-11-24 09:58:37.247 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:37.529Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:58:37 compute-0 ceph-mon[74331]: pgmap v954: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 17 KiB/s wr, 2 op/s
Nov 24 09:58:37 compute-0 podman[273156]: 2025-11-24 09:58:37.789010975 +0000 UTC m=+0.062107240 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 09:58:37 compute-0 podman[273157]: 2025-11-24 09:58:37.813025299 +0000 UTC m=+0.085550140 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:58:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:38.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:58:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:39.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 24 09:58:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:58:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:58:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:58:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:58:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:40 compute-0 ceph-mon[74331]: pgmap v955: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 24 09:58:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:40.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:40 compute-0 sshd-session[273208]: Invalid user ftp1 from 36.255.3.203 port 56020
Nov 24 09:58:40 compute-0 sshd-session[273208]: Received disconnect from 36.255.3.203 port 56020:11: Bye Bye [preauth]
Nov 24 09:58:40 compute-0 sshd-session[273208]: Disconnected from invalid user ftp1 36.255.3.203 port 56020 [preauth]
Nov 24 09:58:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:58:40] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 24 09:58:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:58:40] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Nov 24 09:58:40 compute-0 nova_compute[257700]: 2025-11-24 09:58:40.998 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:41.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 24 09:58:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:42.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:42 compute-0 nova_compute[257700]: 2025-11-24 09:58:42.249 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:42 compute-0 ceph-mon[74331]: pgmap v956: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 24 09:58:42 compute-0 nova_compute[257700]: 2025-11-24 09:58:42.464 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "72448c73-f653-4d79-8800-4ac3e9261a45" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:42 compute-0 nova_compute[257700]: 2025-11-24 09:58:42.465 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:42 compute-0 nova_compute[257700]: 2025-11-24 09:58:42.480 257704 DEBUG nova.compute.manager [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 09:58:42 compute-0 nova_compute[257700]: 2025-11-24 09:58:42.551 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:42 compute-0 nova_compute[257700]: 2025-11-24 09:58:42.551 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:42 compute-0 nova_compute[257700]: 2025-11-24 09:58:42.559 257704 DEBUG nova.virt.hardware [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 09:58:42 compute-0 nova_compute[257700]: 2025-11-24 09:58:42.559 257704 INFO nova.compute.claims [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Claim successful on node compute-0.ctlplane.example.com
Nov 24 09:58:42 compute-0 nova_compute[257700]: 2025-11-24 09:58:42.676 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:43.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 5.7 KiB/s wr, 2 op/s
Nov 24 09:58:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:58:43 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654886703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.168 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
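
Nova's pool-usage query above shells out to the ceph CLI rather than using librados directly. The same call reduced to its essentials — the command string is taken verbatim from the log; the JSON key names follow the usual `ceph df --format=json` layout and error handling is deliberately minimal:

    import json, subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                   check=True).stdout)
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])
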
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.177 257704 DEBUG nova.compute.provider_tree [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.192 257704 DEBUG nova.scheduler.client.report [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
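
The inventory dict above is what placement turns into schedulable capacity: for each resource class, usable = (total - reserved) * allocation_ratio. Worked through with the logged values:

    inventory = {  # copied from the log line above
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2
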
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.214 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
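
The Acquiring/acquired/released triplets around "compute_resources" come from oslo.concurrency's lock helpers; the resource tracker serializes claims this way so only one build at a time can mutate the host's accounting. A minimal sketch of the same pattern (the decorated function is hypothetical; nova's real claim logic lives in nova.compute.resource_tracker.ResourceTracker.instance_claim):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def claim_instance(instance_uuid):
        # Illustrative body only: everything here runs under the named lock,
        # which is what produces the acquired/released pairs in the log.
        print("claiming", instance_uuid)

    claim_instance("72448c73-f653-4d79-8800-4ac3e9261a45")
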
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.215 257704 DEBUG nova.compute.manager [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.258 257704 DEBUG nova.compute.manager [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.259 257704 DEBUG nova.network.neutron [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 09:58:43 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/654886703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.283 257704 INFO nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.301 257704 DEBUG nova.compute.manager [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.414 257704 DEBUG nova.compute.manager [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.416 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.416 257704 INFO nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Creating image(s)
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.451 257704 DEBUG nova.storage.rbd_utils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 72448c73-f653-4d79-8800-4ac3e9261a45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.489 257704 DEBUG nova.storage.rbd_utils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 72448c73-f653-4d79-8800-4ac3e9261a45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.524 257704 DEBUG nova.storage.rbd_utils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 72448c73-f653-4d79-8800-4ac3e9261a45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.531 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.563 257704 DEBUG nova.policy [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.622 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
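
Note the wrapper around qemu-img above: oslo_concurrency.prlimit caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU before probing the image, a guard against pathological image files. A rough stdlib equivalent of that guard; the command and limits mirror the logged invocation, but the preexec_fn approach is my assumption, not how oslo implements it:

    import os, resource, subprocess

    def limited():  # runs in the child just before exec
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))  # 1 GiB
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))           # 30 s CPU

    subprocess.run(
        ["qemu-img", "info", "--force-share", "--output=json",
         "/var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40"],
        preexec_fn=limited,
        env={**os.environ, "LC_ALL": "C", "LANG": "C"},
        capture_output=True, text=True,
    )
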
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.623 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.624 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.624 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.658 257704 DEBUG nova.storage.rbd_utils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 72448c73-f653-4d79-8800-4ac3e9261a45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.664 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 72448c73-f653-4d79-8800-4ac3e9261a45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:43 compute-0 nova_compute[257700]: 2025-11-24 09:58:43.941 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 72448c73-f653-4d79-8800-4ac3e9261a45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:58:44 compute-0 nova_compute[257700]: 2025-11-24 09:58:44.016 257704 DEBUG nova.storage.rbd_utils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 72448c73-f653-4d79-8800-4ac3e9261a45_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
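
The import/resize pair above goes through the rbd CLI; the same resize expressed with the python-rbd bindings would look roughly like this (pool, image name, and size are copied from the log; connecting as client.openstack follows the --id flag, and the rest is a sketch):

    import rados, rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "72448c73-f653-4d79-8800-4ac3e9261a45_disk") as img:
            img.resize(1073741824)  # grow to the flavor's 1 GiB root disk
    finally:
        ioctx.close()
        cluster.shutdown()
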
Nov 24 09:58:44 compute-0 nova_compute[257700]: 2025-11-24 09:58:44.112 257704 DEBUG nova.objects.instance [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 72448c73-f653-4d79-8800-4ac3e9261a45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:58:44 compute-0 nova_compute[257700]: 2025-11-24 09:58:44.124 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 09:58:44 compute-0 nova_compute[257700]: 2025-11-24 09:58:44.124 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Ensure instance console log exists: /var/lib/nova/instances/72448c73-f653-4d79-8800-4ac3e9261a45/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 09:58:44 compute-0 nova_compute[257700]: 2025-11-24 09:58:44.124 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:44 compute-0 nova_compute[257700]: 2025-11-24 09:58:44.125 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:44 compute-0 nova_compute[257700]: 2025-11-24 09:58:44.125 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:44 compute-0 nova_compute[257700]: 2025-11-24 09:58:44.184 257704 DEBUG nova.network.neutron [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Successfully created port: aaa65cd2-1ea3-464c-88bb-de1faf8ae995 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 09:58:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:44.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:44 compute-0 ceph-mon[74331]: pgmap v957: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 5.7 KiB/s wr, 2 op/s
Nov 24 09:58:44 compute-0 nova_compute[257700]: 2025-11-24 09:58:44.956 257704 DEBUG nova.network.neutron [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Successfully updated port: aaa65cd2-1ea3-464c-88bb-de1faf8ae995 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 09:58:44 compute-0 nova_compute[257700]: 2025-11-24 09:58:44.966 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-72448c73-f653-4d79-8800-4ac3e9261a45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:58:44 compute-0 nova_compute[257700]: 2025-11-24 09:58:44.967 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-72448c73-f653-4d79-8800-4ac3e9261a45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:58:44 compute-0 nova_compute[257700]: 2025-11-24 09:58:44.967 257704 DEBUG nova.network.neutron [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 09:58:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:58:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:58:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:58:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:58:45 compute-0 nova_compute[257700]: 2025-11-24 09:58:45.049 257704 DEBUG nova.compute.manager [req-f6788c4a-473c-4264-ab28-6ac059c3a997 req-80b084cc-69d4-493d-a0a8-892c35aa8a5e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Received event network-changed-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:58:45 compute-0 nova_compute[257700]: 2025-11-24 09:58:45.049 257704 DEBUG nova.compute.manager [req-f6788c4a-473c-4264-ab28-6ac059c3a997 req-80b084cc-69d4-493d-a0a8-892c35aa8a5e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Refreshing instance network info cache due to event network-changed-aaa65cd2-1ea3-464c-88bb-de1faf8ae995. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 09:58:45 compute-0 nova_compute[257700]: 2025-11-24 09:58:45.050 257704 DEBUG oslo_concurrency.lockutils [req-f6788c4a-473c-4264-ab28-6ac059c3a997 req-80b084cc-69d4-493d-a0a8-892c35aa8a5e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-72448c73-f653-4d79-8800-4ac3e9261a45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:58:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:45.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:45 compute-0 nova_compute[257700]: 2025-11-24 09:58:45.093 257704 DEBUG nova.network.neutron [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1023 B/s wr, 1 op/s
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:58:45
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'vms', '.mgr', '.nfs', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root']
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:58:45 compute-0 sshd-session[273205]: Received disconnect from 45.78.198.78 port 34676:11: Bye Bye [preauth]
Nov 24 09:58:45 compute-0 sshd-session[273205]: Disconnected from 45.78.198.78 port 34676 [preauth]
Nov 24 09:58:45 compute-0 podman[273404]: 2025-11-24 09:58:45.772857851 +0000 UTC m=+0.049072024 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000759845367747607 of space, bias 1.0, pg target 0.2279536103242821 quantized to 32 (current 32)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:58:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:58:46 compute-0 nova_compute[257700]: 2025-11-24 09:58:46.000 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:46.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:46 compute-0 ceph-mon[74331]: pgmap v958: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1023 B/s wr, 1 op/s
Nov 24 09:58:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:58:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:47.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:47 compute-0 sudo[273426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:58:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 24 09:58:47 compute-0 sudo[273426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:58:47 compute-0 sudo[273426]: pam_unix(sudo:session): session closed for user root
Nov 24 09:58:47 compute-0 nova_compute[257700]: 2025-11-24 09:58:47.250 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:47.530Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:58:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:48.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:48 compute-0 ceph-mon[74331]: pgmap v959: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.500 257704 DEBUG nova.network.neutron [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Updating instance_info_cache with network_info: [{"id": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "address": "fa:16:3e:60:a7:a4", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa65cd2-1e", "ovs_interfaceid": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.517 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-72448c73-f653-4d79-8800-4ac3e9261a45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.517 257704 DEBUG nova.compute.manager [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Instance network_info: |[{"id": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "address": "fa:16:3e:60:a7:a4", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa65cd2-1e", "ovs_interfaceid": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.517 257704 DEBUG oslo_concurrency.lockutils [req-f6788c4a-473c-4264-ab28-6ac059c3a997 req-80b084cc-69d4-493d-a0a8-892c35aa8a5e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-72448c73-f653-4d79-8800-4ac3e9261a45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.518 257704 DEBUG nova.network.neutron [req-f6788c4a-473c-4264-ab28-6ac059c3a997 req-80b084cc-69d4-493d-a0a8-892c35aa8a5e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Refreshing network info cache for port aaa65cd2-1ea3-464c-88bb-de1faf8ae995 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.521 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Start _get_guest_xml network_info=[{"id": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "address": "fa:16:3e:60:a7:a4", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa65cd2-1e", "ovs_interfaceid": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_format': None, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.525 257704 WARNING nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.534 257704 DEBUG nova.virt.libvirt.host [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.535 257704 DEBUG nova.virt.libvirt.host [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.538 257704 DEBUG nova.virt.libvirt.host [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.539 257704 DEBUG nova.virt.libvirt.host [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.539 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.540 257704 DEBUG nova.virt.hardware [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.540 257704 DEBUG nova.virt.hardware [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.540 257704 DEBUG nova.virt.hardware [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.541 257704 DEBUG nova.virt.hardware [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.541 257704 DEBUG nova.virt.hardware [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.541 257704 DEBUG nova.virt.hardware [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.542 257704 DEBUG nova.virt.hardware [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.542 257704 DEBUG nova.virt.hardware [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.542 257704 DEBUG nova.virt.hardware [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.543 257704 DEBUG nova.virt.hardware [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.543 257704 DEBUG nova.virt.hardware [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.547 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:48.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:58:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:48.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:58:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:58:48 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1615056899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:58:48 compute-0 nova_compute[257700]: 2025-11-24 09:58:48.987 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.023 257704 DEBUG nova.storage.rbd_utils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 72448c73-f653-4d79-8800-4ac3e9261a45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.029 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:58:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:58:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 09:58:49 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1615056899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:58:49 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 09:58:49 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4212559010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.477 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.479 257704 DEBUG nova.virt.libvirt.vif [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:58:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-171812519',display_name='tempest-TestNetworkBasicOps-server-171812519',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-171812519',id=7,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCgmeMHJykGqateSnctRhNcdOGXzWmSb9mhWenyV80u/CYEJehcRd0ODCCJk4df+FrF3/+0nuDoJ1DckUOiB1lj/KcXR6y85li1qYj4UyiOPaprHciIxj1dy7cPFGRzkiA==',key_name='tempest-TestNetworkBasicOps-625568321',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-iwncyci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:58:43Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=72448c73-f653-4d79-8800-4ac3e9261a45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "address": "fa:16:3e:60:a7:a4", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa65cd2-1e", "ovs_interfaceid": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.479 257704 DEBUG nova.network.os_vif_util [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "address": "fa:16:3e:60:a7:a4", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa65cd2-1e", "ovs_interfaceid": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.480 257704 DEBUG nova.network.os_vif_util [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=aaa65cd2-1ea3-464c-88bb-de1faf8ae995,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa65cd2-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.481 257704 DEBUG nova.objects.instance [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 72448c73-f653-4d79-8800-4ac3e9261a45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.490 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] End _get_guest_xml xml=<domain type="kvm">
Nov 24 09:58:49 compute-0 nova_compute[257700]:   <uuid>72448c73-f653-4d79-8800-4ac3e9261a45</uuid>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   <name>instance-00000007</name>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   <memory>131072</memory>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   <vcpu>1</vcpu>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   <metadata>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <nova:name>tempest-TestNetworkBasicOps-server-171812519</nova:name>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <nova:creationTime>2025-11-24 09:58:48</nova:creationTime>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <nova:flavor name="m1.nano">
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <nova:memory>128</nova:memory>
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <nova:disk>1</nova:disk>
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <nova:swap>0</nova:swap>
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <nova:vcpus>1</nova:vcpus>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       </nova:flavor>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <nova:owner>
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       </nova:owner>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <nova:ports>
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <nova:port uuid="aaa65cd2-1ea3-464c-88bb-de1faf8ae995">
Nov 24 09:58:49 compute-0 nova_compute[257700]:           <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:         </nova:port>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       </nova:ports>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     </nova:instance>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   </metadata>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   <sysinfo type="smbios">
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <system>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <entry name="manufacturer">RDO</entry>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <entry name="product">OpenStack Compute</entry>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <entry name="serial">72448c73-f653-4d79-8800-4ac3e9261a45</entry>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <entry name="uuid">72448c73-f653-4d79-8800-4ac3e9261a45</entry>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <entry name="family">Virtual Machine</entry>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     </system>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   </sysinfo>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   <os>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <boot dev="hd"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <smbios mode="sysinfo"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   </os>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   <features>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <acpi/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <apic/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <vmcoreinfo/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   </features>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   <clock offset="utc">
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <timer name="hpet" present="no"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   </clock>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   <cpu mode="host-model" match="exact">
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   </cpu>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   <devices>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <disk type="network" device="disk">
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/72448c73-f653-4d79-8800-4ac3e9261a45_disk">
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       </source>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       </auth>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <target dev="vda" bus="virtio"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     </disk>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <disk type="network" device="cdrom">
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/72448c73-f653-4d79-8800-4ac3e9261a45_disk.config">
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       </source>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 09:58:49 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       </auth>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <target dev="sda" bus="sata"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     </disk>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <interface type="ethernet">
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <mac address="fa:16:3e:60:a7:a4"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <mtu size="1442"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <target dev="tapaaa65cd2-1e"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     </interface>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <serial type="pty">
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <log file="/var/lib/nova/instances/72448c73-f653-4d79-8800-4ac3e9261a45/console.log" append="off"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     </serial>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <video>
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     </video>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <input type="tablet" bus="usb"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <rng model="virtio">
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <backend model="random">/dev/urandom</backend>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     </rng>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <controller type="usb" index="0"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     <memballoon model="virtio">
Nov 24 09:58:49 compute-0 nova_compute[257700]:       <stats period="10"/>
Nov 24 09:58:49 compute-0 nova_compute[257700]:     </memballoon>
Nov 24 09:58:49 compute-0 nova_compute[257700]:   </devices>
Nov 24 09:58:49 compute-0 nova_compute[257700]: </domain>
Nov 24 09:58:49 compute-0 nova_compute[257700]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.491 257704 DEBUG nova.compute.manager [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Preparing to wait for external event network-vif-plugged-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.492 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.492 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.492 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.493 257704 DEBUG nova.virt.libvirt.vif [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T09:58:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-171812519',display_name='tempest-TestNetworkBasicOps-server-171812519',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-171812519',id=7,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCgmeMHJykGqateSnctRhNcdOGXzWmSb9mhWenyV80u/CYEJehcRd0ODCCJk4df+FrF3/+0nuDoJ1DckUOiB1lj/KcXR6y85li1qYj4UyiOPaprHciIxj1dy7cPFGRzkiA==',key_name='tempest-TestNetworkBasicOps-625568321',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-iwncyci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T09:58:43Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=72448c73-f653-4d79-8800-4ac3e9261a45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "address": "fa:16:3e:60:a7:a4", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa65cd2-1e", "ovs_interfaceid": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.493 257704 DEBUG nova.network.os_vif_util [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "address": "fa:16:3e:60:a7:a4", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa65cd2-1e", "ovs_interfaceid": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.494 257704 DEBUG nova.network.os_vif_util [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=aaa65cd2-1ea3-464c-88bb-de1faf8ae995,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa65cd2-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.494 257704 DEBUG os_vif [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=aaa65cd2-1ea3-464c-88bb-de1faf8ae995,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa65cd2-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.495 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.495 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.495 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.498 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.499 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaaa65cd2-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.499 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaaa65cd2-1e, col_values=(('external_ids', {'iface-id': 'aaa65cd2-1ea3-464c-88bb-de1faf8ae995', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:a7:a4', 'vm-uuid': '72448c73-f653-4d79-8800-4ac3e9261a45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.533 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:49 compute-0 NetworkManager[48883]: <info>  [1763978329.5340] manager: (tapaaa65cd2-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.536 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.538 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.540 257704 INFO os_vif [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=aaa65cd2-1ea3-464c-88bb-de1faf8ae995,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa65cd2-1e')
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.578 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.579 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.579 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:60:a7:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.579 257704 INFO nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Using config drive
Nov 24 09:58:49 compute-0 nova_compute[257700]: 2025-11-24 09:58:49.601 257704 DEBUG nova.storage.rbd_utils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 72448c73-f653-4d79-8800-4ac3e9261a45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:58:49 compute-0 sshd-session[273425]: Received disconnect from 14.215.126.91 port 59760:11: Bye Bye [preauth]
Nov 24 09:58:49 compute-0 sshd-session[273425]: Disconnected from authenticating user root 14.215.126.91 port 59760 [preauth]
Nov 24 09:58:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:58:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:58:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:58:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:58:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:50.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:50 compute-0 ceph-mon[74331]: pgmap v960: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 09:58:50 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4212559010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.499 257704 INFO nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Creating config drive at /var/lib/nova/instances/72448c73-f653-4d79-8800-4ac3e9261a45/disk.config
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.504 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/72448c73-f653-4d79-8800-4ac3e9261a45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgmt9w_cm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.618 257704 DEBUG nova.network.neutron [req-f6788c4a-473c-4264-ab28-6ac059c3a997 req-80b084cc-69d4-493d-a0a8-892c35aa8a5e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Updated VIF entry in instance network info cache for port aaa65cd2-1ea3-464c-88bb-de1faf8ae995. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.619 257704 DEBUG nova.network.neutron [req-f6788c4a-473c-4264-ab28-6ac059c3a997 req-80b084cc-69d4-493d-a0a8-892c35aa8a5e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Updating instance_info_cache with network_info: [{"id": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "address": "fa:16:3e:60:a7:a4", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa65cd2-1e", "ovs_interfaceid": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.632 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/72448c73-f653-4d79-8800-4ac3e9261a45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgmt9w_cm" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.660 257704 DEBUG nova.storage.rbd_utils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 72448c73-f653-4d79-8800-4ac3e9261a45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.663 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/72448c73-f653-4d79-8800-4ac3e9261a45/disk.config 72448c73-f653-4d79-8800-4ac3e9261a45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.686 257704 DEBUG oslo_concurrency.lockutils [req-f6788c4a-473c-4264-ab28-6ac059c3a997 req-80b084cc-69d4-493d-a0a8-892c35aa8a5e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-72448c73-f653-4d79-8800-4ac3e9261a45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.835 257704 DEBUG oslo_concurrency.processutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/72448c73-f653-4d79-8800-4ac3e9261a45/disk.config 72448c73-f653-4d79-8800-4ac3e9261a45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.836 257704 INFO nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Deleting local config drive /var/lib/nova/instances/72448c73-f653-4d79-8800-4ac3e9261a45/disk.config because it was imported into RBD.
Nov 24 09:58:50 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 24 09:58:50 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 24 09:58:50 compute-0 kernel: tapaaa65cd2-1e: entered promiscuous mode
Nov 24 09:58:50 compute-0 NetworkManager[48883]: <info>  [1763978330.9348] manager: (tapaaa65cd2-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Nov 24 09:58:50 compute-0 ovn_controller[155123]: 2025-11-24T09:58:50Z|00050|binding|INFO|Claiming lport aaa65cd2-1ea3-464c-88bb-de1faf8ae995 for this chassis.
Nov 24 09:58:50 compute-0 ovn_controller[155123]: 2025-11-24T09:58:50Z|00051|binding|INFO|aaa65cd2-1ea3-464c-88bb-de1faf8ae995: Claiming fa:16:3e:60:a7:a4 10.100.0.22
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.934 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:50 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:50.949 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:a7:a4 10.100.0.22'], port_security=['fa:16:3e:60:a7:a4 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': '72448c73-f653-4d79-8800-4ac3e9261a45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0b398c0-c649-449a-9c4b-f8d4c7e08ebf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58766ea9-d6bf-4e11-9e8a-1652f6f7c4d5, chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=aaa65cd2-1ea3-464c-88bb-de1faf8ae995) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:58:50 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:50.952 165073 INFO neutron.agent.ovn.metadata.agent [-] Port aaa65cd2-1ea3-464c-88bb-de1faf8ae995 in datapath cbb18554-4df6-4004-8b94-6d2a9b50722d bound to our chassis
Nov 24 09:58:50 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:50.954 165073 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cbb18554-4df6-4004-8b94-6d2a9b50722d
Nov 24 09:58:50 compute-0 systemd-machined[219130]: New machine qemu-3-instance-00000007.
Nov 24 09:58:50 compute-0 systemd-udevd[273611]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 09:58:50 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:50.971 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[83f23319-41b0-4539-8c15-0dee94ffa0a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:50 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:50.971 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcbb18554-41 in ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 09:58:50 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:50.974 264910 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcbb18554-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 09:58:50 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:50.974 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[8c395cd0-1d03-4fee-b541-c7a50ef09ccd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:50 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:50.975 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[6d7b330c-1199-4288-affd-19a688dcf00e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.980 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:50 compute-0 NetworkManager[48883]: <info>  [1763978330.9822] device (tapaaa65cd2-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 09:58:50 compute-0 NetworkManager[48883]: <info>  [1763978330.9833] device (tapaaa65cd2-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 09:58:50 compute-0 ovn_controller[155123]: 2025-11-24T09:58:50Z|00052|binding|INFO|Setting lport aaa65cd2-1ea3-464c-88bb-de1faf8ae995 ovn-installed in OVS
Nov 24 09:58:50 compute-0 ovn_controller[155123]: 2025-11-24T09:58:50Z|00053|binding|INFO|Setting lport aaa65cd2-1ea3-464c-88bb-de1faf8ae995 up in Southbound
Nov 24 09:58:50 compute-0 nova_compute[257700]: 2025-11-24 09:58:50.985 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:50 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000007.
Nov 24 09:58:50 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:50.987 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[6fab7fa9-5f39-4d87-a0e9-6c5a0fdfdc39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:58:50] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:58:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:58:50] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.002 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.014 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[824047c5-4953-49dc-a829-d61dad364e3c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.050 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[ff19045c-4645-41a6-9b54-cad85225394f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.054 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8b6160-d383-4f3b-b3c7-1aa93c86f5ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:51 compute-0 NetworkManager[48883]: <info>  [1763978331.0561] manager: (tapcbb18554-40): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Nov 24 09:58:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:51.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.089 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[adccb20a-27a6-400d-bc93-bf9946d37e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.092 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[eaff24b3-d14b-4705-a359-8d81eabec8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:51 compute-0 NetworkManager[48883]: <info>  [1763978331.1110] device (tapcbb18554-40): carrier: link connected
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.115 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c0414b-6142-469f-8bdf-258e13bb5a35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.129 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[d927db60-12d7-4c6f-bdf8-ff949350ed8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbb18554-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:d4:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427868, 'reachable_time': 36202, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273643, 'error': None, 'target': 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.141 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1b07d1-d7f7-419b-906f-467e5dd58ac2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe03:d482'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427868, 'tstamp': 427868}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273644, 'error': None, 'target': 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.153 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[f5cfb798-7ddb-451d-a675-b9e124598b5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbb18554-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:d4:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427868, 'reachable_time': 36202, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273645, 'error': None, 'target': 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.182 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[af869aaa-d02c-4e45-8b3d-376637b479d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.240 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[a4adcc05-bd7c-4c94-b045-3d59cb2c5829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.241 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbb18554-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.242 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.242 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbb18554-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:51 compute-0 kernel: tapcbb18554-40: entered promiscuous mode
Nov 24 09:58:51 compute-0 NetworkManager[48883]: <info>  [1763978331.2449] manager: (tapcbb18554-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.245 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.249 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcbb18554-40, col_values=(('external_ids', {'iface-id': '7477e0b1-7d3c-42ae-9333-aaa2b41f75a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.250 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:51 compute-0 ovn_controller[155123]: 2025-11-24T09:58:51Z|00054|binding|INFO|Releasing lport 7477e0b1-7d3c-42ae-9333-aaa2b41f75a9 from this chassis (sb_readonly=0)
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.251 165073 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cbb18554-4df6-4004-8b94-6d2a9b50722d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cbb18554-4df6-4004-8b94-6d2a9b50722d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.252 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[b66dbfae-fd0b-4e70-b2ff-02b80786f975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.253 165073 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: global
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     log         /dev/log local0 debug
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     log-tag     haproxy-metadata-proxy-cbb18554-4df6-4004-8b94-6d2a9b50722d
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     user        root
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     group       root
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     maxconn     1024
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     pidfile     /var/lib/neutron/external/pids/cbb18554-4df6-4004-8b94-6d2a9b50722d.pid.haproxy
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     daemon
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: defaults
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     log global
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     mode http
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     option httplog
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     option dontlognull
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     option http-server-close
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     option forwardfor
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     retries                 3
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     timeout http-request    30s
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     timeout connect         30s
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     timeout client          32s
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     timeout server          32s
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     timeout http-keep-alive 30s
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: listen listener
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     bind 169.254.169.254:80
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:     http-request add-header X-OVN-Network-ID cbb18554-4df6-4004-8b94-6d2a9b50722d
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 09:58:51 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:58:51.254 165073 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'env', 'PROCESS_TAG=haproxy-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cbb18554-4df6-4004-8b94-6d2a9b50722d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.263 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.514 257704 DEBUG nova.compute.manager [req-2646e2d3-0d3b-444a-8167-894e0c472dfc req-68a39e0e-463c-4ee7-af62-b6da3afd151f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Received event network-vif-plugged-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.515 257704 DEBUG oslo_concurrency.lockutils [req-2646e2d3-0d3b-444a-8167-894e0c472dfc req-68a39e0e-463c-4ee7-af62-b6da3afd151f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.515 257704 DEBUG oslo_concurrency.lockutils [req-2646e2d3-0d3b-444a-8167-894e0c472dfc req-68a39e0e-463c-4ee7-af62-b6da3afd151f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.516 257704 DEBUG oslo_concurrency.lockutils [req-2646e2d3-0d3b-444a-8167-894e0c472dfc req-68a39e0e-463c-4ee7-af62-b6da3afd151f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.516 257704 DEBUG nova.compute.manager [req-2646e2d3-0d3b-444a-8167-894e0c472dfc req-68a39e0e-463c-4ee7-af62-b6da3afd151f 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Processing event network-vif-plugged-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.529 257704 DEBUG nova.compute.manager [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.530 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978331.528985, 72448c73-f653-4d79-8800-4ac3e9261a45 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.531 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] VM Started (Lifecycle Event)
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.537 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.540 257704 INFO nova.virt.libvirt.driver [-] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Instance spawned successfully.
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.541 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.565 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.570 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.571 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.571 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.572 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.572 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.573 257704 DEBUG nova.virt.libvirt.driver [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.577 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.610 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.611 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978331.5301614, 72448c73-f653-4d79-8800-4ac3e9261a45 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.612 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] VM Paused (Lifecycle Event)
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.630 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.635 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978331.536711, 72448c73-f653-4d79-8800-4ac3e9261a45 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.636 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] VM Resumed (Lifecycle Event)
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.642 257704 INFO nova.compute.manager [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Took 8.23 seconds to spawn the instance on the hypervisor.
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.642 257704 DEBUG nova.compute.manager [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.652 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.655 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.685 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.724 257704 INFO nova.compute.manager [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Took 9.20 seconds to build instance.
Nov 24 09:58:51 compute-0 nova_compute[257700]: 2025-11-24 09:58:51.742 257704 DEBUG oslo_concurrency.lockutils [None req-ddd2d296-740e-4530-a743-af59b8c4d57e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:51 compute-0 podman[273720]: 2025-11-24 09:58:51.745713859 +0000 UTC m=+0.054558347 container create 40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 24 09:58:51 compute-0 systemd[1]: Started libpod-conmon-40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304.scope.
Nov 24 09:58:51 compute-0 podman[273720]: 2025-11-24 09:58:51.714937841 +0000 UTC m=+0.023782319 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 09:58:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807471785bf64d89d388596599806772ebc0ffb18d232d146376ea154b497c05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 09:58:51 compute-0 podman[273720]: 2025-11-24 09:58:51.867495498 +0000 UTC m=+0.176339966 container init 40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 09:58:51 compute-0 podman[273720]: 2025-11-24 09:58:51.873405852 +0000 UTC m=+0.182250300 container start 40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 24 09:58:51 compute-0 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[273735]: [NOTICE]   (273739) : New worker (273741) forked
Nov 24 09:58:51 compute-0 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[273735]: [NOTICE]   (273739) : Loading success.
Nov 24 09:58:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:52.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:52 compute-0 ceph-mon[74331]: pgmap v961: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 09:58:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:58:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:53.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:58:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 24 09:58:53 compute-0 nova_compute[257700]: 2025-11-24 09:58:53.607 257704 DEBUG nova.compute.manager [req-076d4d23-f393-4abe-930d-d6af11ed9f83 req-33c7eb87-3a0d-4aad-acc1-d84c90a1006d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Received event network-vif-plugged-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:58:53 compute-0 nova_compute[257700]: 2025-11-24 09:58:53.608 257704 DEBUG oslo_concurrency.lockutils [req-076d4d23-f393-4abe-930d-d6af11ed9f83 req-33c7eb87-3a0d-4aad-acc1-d84c90a1006d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:58:53 compute-0 nova_compute[257700]: 2025-11-24 09:58:53.608 257704 DEBUG oslo_concurrency.lockutils [req-076d4d23-f393-4abe-930d-d6af11ed9f83 req-33c7eb87-3a0d-4aad-acc1-d84c90a1006d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:58:53 compute-0 nova_compute[257700]: 2025-11-24 09:58:53.608 257704 DEBUG oslo_concurrency.lockutils [req-076d4d23-f393-4abe-930d-d6af11ed9f83 req-33c7eb87-3a0d-4aad-acc1-d84c90a1006d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:58:53 compute-0 nova_compute[257700]: 2025-11-24 09:58:53.609 257704 DEBUG nova.compute.manager [req-076d4d23-f393-4abe-930d-d6af11ed9f83 req-33c7eb87-3a0d-4aad-acc1-d84c90a1006d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] No waiting events found dispatching network-vif-plugged-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:58:53 compute-0 nova_compute[257700]: 2025-11-24 09:58:53.609 257704 WARNING nova.compute.manager [req-076d4d23-f393-4abe-930d-d6af11ed9f83 req-33c7eb87-3a0d-4aad-acc1-d84c90a1006d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Received unexpected event network-vif-plugged-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 for instance with vm_state active and task_state None.
Nov 24 09:58:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:54.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:54 compute-0 ceph-mon[74331]: pgmap v962: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Nov 24 09:58:54 compute-0 nova_compute[257700]: 2025-11-24 09:58:54.534 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:58:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:58:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:58:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:58:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:55.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 24 09:58:56 compute-0 nova_compute[257700]: 2025-11-24 09:58:56.004 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:58:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:56.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:56 compute-0 ceph-mon[74331]: pgmap v963: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 24 09:58:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:57.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 24 09:58:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:57.530Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:58:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:57.530Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:58:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:57.531Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
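The alertmanager dispatcher is failing to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2 (dial timeouts to 192.168.122.101/102:8443), retrying, then cancelling; these error bursts recur through the rest of the section. A sketch to group the failures by target URL; note the journal escapes the quotes inside err="...", so the regex expects the literal backslash-quote pairs as shown:

```python
# Sketch: group the Alertmanager webhook failures above by target URL.
# The journal escapes quotes inside err="...", so the regex expects
# literal backslash-quote around the URL.
import re
import sys
from collections import Counter

URL_RE = re.compile(r'Post \\"(http[^"\\]+)\\"')

failures = Counter()
for line in sys.stdin:
    if "component=dispatcher" not in line:
        continue
    for m in URL_RE.finditer(line):
        failures[m.group(1)] += 1

for url, n in failures.most_common():
    print(f"{n:4d}  {url}")
```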
Nov 24 09:58:57 compute-0 ceph-mon[74331]: pgmap v964: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 24 09:58:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:58:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:58:58.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:58:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:58:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:58:58.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:58:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:58:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:58:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:58:59.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:58:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Nov 24 09:58:59 compute-0 nova_compute[257700]: 2025-11-24 09:58:59.536 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:58:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:59:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:59:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:59:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:59:00 compute-0 ceph-mon[74331]: pgmap v965: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Nov 24 09:59:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:00.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:59:00] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 09:59:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:59:00] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
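The mgr's prometheus module records each scrape twice here (once via the container's access log, once via cherrypy), returning ~48 KiB of metrics every 10 s. A sketch to fetch the same endpoint by hand; the URL is a placeholder, since the log shows the scrape but not the listener port (9283 is the module's usual default):

```python
# Sketch: scrape the ceph-mgr prometheus endpoint seen above. The URL
# is a placeholder -- the log shows the scrape but not the listener
# port; 9283 is the module's usual default.
import urllib.request

URL = "http://192.168.122.100:9283/metrics"

with urllib.request.urlopen(URL, timeout=5) as resp:
    body = resp.read().decode("utf-8", "replace")

samples = [l for l in body.splitlines() if l and not l.startswith("#")]
print(f"{len(body)} bytes, {len(samples)} samples")
```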
Nov 24 09:59:01 compute-0 nova_compute[257700]: 2025-11-24 09:59:01.006 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:01.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Nov 24 09:59:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:59:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 09:59:01 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4135505958' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:59:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 09:59:01 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4135505958' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:59:02 compute-0 ceph-mon[74331]: pgmap v966: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Nov 24 09:59:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/4135505958' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 09:59:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/4135505958' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 09:59:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:02.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:03.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 76 op/s
Nov 24 09:59:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:04.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:04 compute-0 ceph-mon[74331]: pgmap v967: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 76 op/s
Nov 24 09:59:04 compute-0 nova_compute[257700]: 2025-11-24 09:59:04.539 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:59:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:59:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:59:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:59:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:05.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 KiB/s wr, 65 op/s
Nov 24 09:59:05 compute-0 ovn_controller[155123]: 2025-11-24T09:59:05Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:60:a7:a4 10.100.0.22
Nov 24 09:59:05 compute-0 ovn_controller[155123]: 2025-11-24T09:59:05Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:60:a7:a4 10.100.0.22
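ovn-controller's pinctrl thread is serving DHCP natively for the instance port: a DHCPOFFER followed by a DHCPACK binding fa:16:3e:60:a7:a4 to 10.100.0.22, the same address that later shows up in nova's network_info cache. A sketch to build a MAC-to-IP lease table from lines of this shape, counting only the ACK as a lease:

```python
# Sketch: build a MAC -> IP lease table from ovn-controller pinctrl
# DHCP lines like the pair above; only DHCPACK is treated as a lease.
import re
import sys

DHCP_RE = re.compile(
    r"pinctrl\([^)]*\)\|INFO\|(?P<kind>DHCPOFFER|DHCPACK) "
    r"(?P<mac>[0-9a-f:]{17}) (?P<ip>\d+\.\d+\.\d+\.\d+)"
)

leases = {}
for line in sys.stdin:
    m = DHCP_RE.search(line)
    if m and m.group("kind") == "DHCPACK":
        leases[m.group("mac")] = m.group("ip")

for mac, ip in sorted(leases.items()):
    print(f"{mac} -> {ip}")
```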
Nov 24 09:59:06 compute-0 nova_compute[257700]: 2025-11-24 09:59:06.008 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:06.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:06 compute-0 ceph-mon[74331]: pgmap v968: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 KiB/s wr, 65 op/s
Nov 24 09:59:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:07.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 24 09:59:07 compute-0 sudo[273766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:59:07 compute-0 sudo[273766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:07 compute-0 sudo[273766]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:07.532Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:59:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:08.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:08 compute-0 ceph-mon[74331]: pgmap v969: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 24 09:59:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:08 compute-0 podman[273793]: 2025-11-24 09:59:08.81218701 +0000 UTC m=+0.063340885 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 09:59:08 compute-0 podman[273794]: 2025-11-24 09:59:08.840085609 +0000 UTC m=+0.093124000 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 09:59:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:08.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:59:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:08.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:59:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:09.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 24 09:59:09 compute-0 nova_compute[257700]: 2025-11-24 09:59:09.540 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:59:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:59:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:59:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:59:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:10.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:10 compute-0 ceph-mon[74331]: pgmap v970: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 24 09:59:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:59:10] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 09:59:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:59:10] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 09:59:11 compute-0 nova_compute[257700]: 2025-11-24 09:59:11.010 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:11.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 24 09:59:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:12.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:12 compute-0 ceph-mon[74331]: pgmap v971: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 24 09:59:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:13.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 24 09:59:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.668366) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353668412, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1552, "num_deletes": 255, "total_data_size": 3006393, "memory_usage": 3072096, "flush_reason": "Manual Compaction"}
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353684468, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2881479, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26927, "largest_seqno": 28477, "table_properties": {"data_size": 2874413, "index_size": 4073, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14851, "raw_average_key_size": 19, "raw_value_size": 2860098, "raw_average_value_size": 3768, "num_data_blocks": 179, "num_entries": 759, "num_filter_entries": 759, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978216, "oldest_key_time": 1763978216, "file_creation_time": 1763978353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 16172 microseconds, and 6294 cpu microseconds.
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.684535) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2881479 bytes OK
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.684562) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.686543) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.686565) EVENT_LOG_v1 {"time_micros": 1763978353686557, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.686586) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2999762, prev total WAL file size 2999762, number of live WAL files 2.
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.688202) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2813KB)], [59(13MB)]
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353688260, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17266818, "oldest_snapshot_seqno": -1}
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6079 keys, 17121339 bytes, temperature: kUnknown
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353774987, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 17121339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17077329, "index_size": 27708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 154699, "raw_average_key_size": 25, "raw_value_size": 16964423, "raw_average_value_size": 2790, "num_data_blocks": 1134, "num_entries": 6079, "num_filter_entries": 6079, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763978353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.775229) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17121339 bytes
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.776555) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.9 rd, 197.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 13.7 +0.0 blob) out(16.3 +0.0 blob), read-write-amplify(11.9) write-amplify(5.9) OK, records in: 6607, records dropped: 528 output_compression: NoCompression
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.776573) EVENT_LOG_v1 {"time_micros": 1763978353776562, "job": 32, "event": "compaction_finished", "compaction_time_micros": 86790, "compaction_time_cpu_micros": 31780, "output_level": 6, "num_output_files": 1, "total_output_size": 17121339, "num_input_records": 6607, "num_output_records": 6079, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353777309, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978353779991, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.688056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.780036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.780040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.780042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.780043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 09:59:13 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-09:59:13.780045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
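The ceph-mon rocksdb burst above is one manual compaction cycle: a memtable flush to L0 (job 31, ~2.9 MB in ~16 ms), then an L0+L6 compaction into a single 17 MB L6 file (job 32, write-amplify 5.9), followed by WAL and SST cleanup. The machine-readable detail sits in the JSON after each EVENT_LOG_v1 marker; a sketch to extract it, assuming each payload is valid JSON running to the end of the line:

```python
# Sketch: pull the JSON payloads out of the rocksdb EVENT_LOG_v1 lines
# above and summarize compactions. Assumes each payload is valid JSON
# running to the end of the line.
import json
import sys

events = []
for line in sys.stdin:
    _, sep, payload = line.partition("EVENT_LOG_v1 ")
    if sep:
        try:
            events.append(json.loads(payload))
        except json.JSONDecodeError:
            pass  # truncated or non-JSON tail; skip

for ev in events:
    if ev.get("event") == "compaction_finished":
        print(f"job={ev['job']} out={ev['total_output_size'] / 1e6:.1f} MB "
              f"in {ev['compaction_time_micros'] / 1e3:.1f} ms")
```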
Nov 24 09:59:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:14.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:14 compute-0 ceph-mon[74331]: pgmap v972: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 24 09:59:14 compute-0 nova_compute[257700]: 2025-11-24 09:59:14.542 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:14 compute-0 nova_compute[257700]: 2025-11-24 09:59:14.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:59:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:59:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:59:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:59:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:15.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 24 09:59:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:59:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:59:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:59:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:59:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:59:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:59:16 compute-0 nova_compute[257700]: 2025-11-24 09:59:16.013 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:16.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:16 compute-0 ceph-mon[74331]: pgmap v973: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 24 09:59:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:59:16 compute-0 podman[273849]: 2025-11-24 09:59:16.774046704 +0000 UTC m=+0.047791616 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 24 09:59:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:17.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 24 09:59:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:17.532Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:59:17 compute-0 nova_compute[257700]: 2025-11-24 09:59:17.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:17 compute-0 nova_compute[257700]: 2025-11-24 09:59:17.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:17 compute-0 nova_compute[257700]: 2025-11-24 09:59:17.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:17 compute-0 nova_compute[257700]: 2025-11-24 09:59:17.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
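nova-compute's periodic task loop is visible in this stretch: tasks fire on their timers, and _reclaim_queued_deletes short-circuits because CONF.reclaim_instance_interval <= 0, meaning deferred instance reclaim is disabled on this node. A sketch to list which ComputeManager tasks appear in a captured journal segment:

```python
# Sketch: list the distinct ComputeManager periodic tasks seen in a
# captured journal segment, from "Running periodic task" lines.
import re
import sys

TASK_RE = re.compile(r"Running periodic task (ComputeManager\.\w+)")

tasks = set()
for line in sys.stdin:
    m = TASK_RE.search(line)
    if m:
        tasks.add(m.group(1))

print("\n".join(sorted(tasks)))
```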
Nov 24 09:59:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:18.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:18 compute-0 ceph-mon[74331]: pgmap v974: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 24 09:59:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:18.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 09:59:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:18.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:59:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:18.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:59:18 compute-0 nova_compute[257700]: 2025-11-24 09:59:18.917 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:19.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 18 KiB/s wr, 2 op/s
Nov 24 09:59:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4007411981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:19 compute-0 nova_compute[257700]: 2025-11-24 09:59:19.544 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:19 compute-0 nova_compute[257700]: 2025-11-24 09:59:19.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:19 compute-0 nova_compute[257700]: 2025-11-24 09:59:19.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 09:59:19 compute-0 nova_compute[257700]: 2025-11-24 09:59:19.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 09:59:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:59:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:59:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:59:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:59:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:20.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:20 compute-0 nova_compute[257700]: 2025-11-24 09:59:20.368 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "refresh_cache-72448c73-f653-4d79-8800-4ac3e9261a45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 09:59:20 compute-0 nova_compute[257700]: 2025-11-24 09:59:20.368 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquired lock "refresh_cache-72448c73-f653-4d79-8800-4ac3e9261a45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 09:59:20 compute-0 nova_compute[257700]: 2025-11-24 09:59:20.368 257704 DEBUG nova.network.neutron [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 09:59:20 compute-0 nova_compute[257700]: 2025-11-24 09:59:20.369 257704 DEBUG nova.objects.instance [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 72448c73-f653-4d79-8800-4ac3e9261a45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:59:20 compute-0 ceph-mon[74331]: pgmap v975: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 18 KiB/s wr, 2 op/s
Nov 24 09:59:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3390432427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:20 compute-0 sshd-session[273870]: Received disconnect from 83.229.122.23 port 47398:11: Bye Bye [preauth]
Nov 24 09:59:20 compute-0 sshd-session[273870]: Disconnected from authenticating user root 83.229.122.23 port 47398 [preauth]
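The sshd-session pair above is an unauthenticated root login attempt from 83.229.122.23 that dropped before authentication completed; [preauth] disconnects like this are the usual signature of password scanning against an exposed SSH port. A sketch to rank scanning sources, matching any preauth line that carries an address and port:

```python
# Sketch: rank sources of sshd [preauth] disconnects like the pair
# above. Matches any preauth line carrying "<ip> port <n>".
import re
import sys
from collections import Counter

ADDR_RE = re.compile(r"(\d+\.\d+\.\d+\.\d+) port \d+")

peers = Counter()
for line in sys.stdin:
    if "sshd" in line and "[preauth]" in line:
        m = ADDR_RE.search(line)
        if m:
            peers[m.group(1)] += 1

for ip, n in peers.most_common(10):
    print(f"{n:4d}  {ip}")
```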
Nov 24 09:59:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:20.570 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:59:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:20.571 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:59:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:20.571 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:59:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:59:20] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 24 09:59:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:59:20] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.014 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:21.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 18 KiB/s wr, 2 op/s
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.415 257704 DEBUG nova.network.neutron [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Updating instance_info_cache with network_info: [{"id": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "address": "fa:16:3e:60:a7:a4", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa65cd2-1e", "ovs_interfaceid": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
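The line above embeds the instance's full network_info as JSON inside the log message: one OVN-bound OVS port (aaa65cd2-1ea3) on br-int with fixed IP 10.100.0.22, matching the DHCPACK seen earlier. A sketch to recover and walk that JSON; a raw decoder is used so the trailing function/path text after the array is ignored:

```python
# Sketch: recover the network_info JSON embedded in the nova line above
# and print MAC -> fixed-IP bindings. raw_decode stops at the end of
# the JSON array, so the trailing "update_instance_cache..." text is
# ignored.
import json
import sys

MARKER = "Updating instance_info_cache with network_info: "
dec = json.JSONDecoder()

for line in sys.stdin:
    _, sep, rest = line.partition(MARKER)
    if not sep:
        continue
    nw_info, _ = dec.raw_decode(rest)
    for vif in nw_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(f'{vif["address"]} -> {ip["address"]}')
```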
Nov 24 09:59:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3070747134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.432 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Releasing lock "refresh_cache-72448c73-f653-4d79-8800-4ac3e9261a45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.433 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.433 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.456 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.456 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.457 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.457 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.457 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:59:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:59:21 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707340900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.890 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
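
The two lines above show the resource tracker shelling out to ceph df (a 0.432 s round trip) to size the RBD-backed disk pool before auditing resources. A sketch of the same call, assuming only that the JSON reply carries a top-level "stats" object with total/available byte counters (key names taken from recent Ceph releases; verify against your version):

import json
import subprocess

# Same command oslo.concurrency logs above, same credentials and conf path.
out = subprocess.check_output([
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
])
stats = json.loads(out)["stats"]

total_gb = stats["total_bytes"] / 1024 ** 3        # assumed key name
avail_gb = stats["total_avail_bytes"] / 1024 ** 3  # assumed key name
print(f"{avail_gb:.1f} GiB free of {total_gb:.1f} GiB")
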
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.951 257704 DEBUG nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 09:59:21 compute-0 nova_compute[257700]: 2025-11-24 09:59:21.951 257704 DEBUG nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.092 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.093 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4380MB free_disk=59.89700698852539GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
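
The pci_devices field in the resource view above is again plain JSON. A quick tally by vendor (run on a trimmed copy; the full logged list has 11 entries) separates the virtio devices (vendor 1af4) from the Intel chipset functions (8086):

import json
from collections import Counter

# Trimmed copy of the pci_devices list from the hypervisor resource view above.
pci_devices = json.loads('''[
  {"address": "0000:00:06.0", "vendor_id": "1af4", "product_id": "1005"},
  {"address": "0000:00:02.0", "vendor_id": "1af4", "product_id": "1050"},
  {"address": "0000:00:01.0", "vendor_id": "8086", "product_id": "7000"}
]''')

print(Counter(dev["vendor_id"] for dev in pci_devices))
# -> Counter({'1af4': 2, '8086': 1}); on the full list logged above it
#    comes to six virtio functions and five Intel chipset functions.
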
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.093 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.094 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.190 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Instance 72448c73-f653-4d79-8800-4ac3e9261a45 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.190 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.191 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.206 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing inventories for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.220 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating ProviderTree inventory for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.220 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating inventory in ProviderTree for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
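
The inventory pushed to placement above encodes usable capacity per resource class as (total - reserved) * allocation_ratio. A worked check against the logged numbers (illustrative arithmetic, not Nova code):

# Inventory exactly as logged for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# VCPU: 32        (8 physical cores oversubscribed 4x)
# MEMORY_MB: 7168 (512 MB held back for the host)
# DISK_GB: 52.2   (ratio 0.9 keeps headroom on the 59 GB store)
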
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.235 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing aggregate associations for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.257 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing trait associations for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,HW_CPU_X86_BMI,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
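
The trait dump above is one flat comma-separated string of roughly sixty entries. Grouping on the leading tokens makes it scannable; a small sketch over a handful of the traits actually logged:

# A few traits from the refresh line above; the real list is much longer.
traits = ("HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VIRTIO,"
          "COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,"
          "COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_QCOW2").split(",")

by_prefix = {}
for trait in traits:
    prefix = "_".join(trait.split("_")[:2])  # e.g. HW_CPU, COMPUTE_NET
    by_prefix.setdefault(prefix, []).append(trait)

for prefix, members in sorted(by_prefix.items()):
    print(prefix, "->", len(members))
# COMPUTE_IMAGE -> 1, COMPUTE_NET -> 2, COMPUTE_STORAGE -> 1, HW_CPU -> 2
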
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.287 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:59:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:22.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
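
radosgw's beast frontend logs one access line per request; the anonymous "HEAD / HTTP/1.0" probes recurring every second here are load-balancer health checks, hence status 200 with 0 bytes. A regex sketch for pulling client, request, status, and latency out of lines shaped exactly like the one above (the pattern is written against this log only; other radosgw versions may format the line differently):

import re

line = ('beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous '
        '[24/Nov/2025:09:59:22.307 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')

pat = re.compile(r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
                 r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
                 r'(?P<nbytes>\d+) .*latency=(?P<latency>[\d.]+)s')

m = pat.search(line)
if m:
    print(m.group("client"), m.group("req"), m.group("status"),
          m.group("latency"))
# -> 192.168.122.102 HEAD / HTTP/1.0 200 0.000000000
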
Nov 24 09:59:22 compute-0 ceph-mon[74331]: pgmap v976: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 18 KiB/s wr, 2 op/s
Nov 24 09:59:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/707340900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2311081532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:59:22 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1334978600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.719 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.724 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.737 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.757 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 09:59:22 compute-0 nova_compute[257700]: 2025-11-24 09:59:22.757 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:59:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:23.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 27 KiB/s wr, 3 op/s
Nov 24 09:59:23 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1334978600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:23 compute-0 sudo[273922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:59:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:23 compute-0 sudo[273922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:23 compute-0 sudo[273922]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:23 compute-0 sudo[273947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 09:59:23 compute-0 sudo[273947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:24 compute-0 nova_compute[257700]: 2025-11-24 09:59:24.245 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:24 compute-0 nova_compute[257700]: 2025-11-24 09:59:24.245 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:24 compute-0 sudo[273947]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:24.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 15 KiB/s wr, 2 op/s
Nov 24 09:59:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 09:59:24 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:59:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 09:59:24 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:59:24 compute-0 ceph-mon[74331]: pgmap v977: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 27 KiB/s wr, 3 op/s
Nov 24 09:59:24 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:59:24 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 09:59:24 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:59:24 compute-0 sudo[274006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:59:24 compute-0 sudo[274006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:24 compute-0 nova_compute[257700]: 2025-11-24 09:59:24.546 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:24 compute-0 sudo[274006]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:24 compute-0 sudo[274031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 09:59:24 compute-0 sudo[274031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:24 compute-0 nova_compute[257700]: 2025-11-24 09:59:24.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 09:59:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:59:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:59:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:59:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:59:25 compute-0 podman[274096]: 2025-11-24 09:59:25.043627307 +0000 UTC m=+0.055095053 container create 9790a6d511e772f825f6b4e21bce13d8e41039369a332318b8eed4485117e62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 09:59:25 compute-0 systemd[1]: Started libpod-conmon-9790a6d511e772f825f6b4e21bce13d8e41039369a332318b8eed4485117e62c.scope.
Nov 24 09:59:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:25.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:25 compute-0 podman[274096]: 2025-11-24 09:59:25.01994966 +0000 UTC m=+0.031417456 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:59:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:59:25 compute-0 podman[274096]: 2025-11-24 09:59:25.149117488 +0000 UTC m=+0.160585234 container init 9790a6d511e772f825f6b4e21bce13d8e41039369a332318b8eed4485117e62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 09:59:25 compute-0 podman[274096]: 2025-11-24 09:59:25.159500001 +0000 UTC m=+0.170967747 container start 9790a6d511e772f825f6b4e21bce13d8e41039369a332318b8eed4485117e62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 09:59:25 compute-0 podman[274096]: 2025-11-24 09:59:25.162542845 +0000 UTC m=+0.174010601 container attach 9790a6d511e772f825f6b4e21bce13d8e41039369a332318b8eed4485117e62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:59:25 compute-0 intelligent_heisenberg[274112]: 167 167
Nov 24 09:59:25 compute-0 systemd[1]: libpod-9790a6d511e772f825f6b4e21bce13d8e41039369a332318b8eed4485117e62c.scope: Deactivated successfully.
Nov 24 09:59:25 compute-0 podman[274096]: 2025-11-24 09:59:25.168843769 +0000 UTC m=+0.180311525 container died 9790a6d511e772f825f6b4e21bce13d8e41039369a332318b8eed4485117e62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 24 09:59:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-71915852ae6e5cd29d232ca6d9114b91b808612178aef4ea582caafc67565fa1-merged.mount: Deactivated successfully.
Nov 24 09:59:25 compute-0 podman[274096]: 2025-11-24 09:59:25.21037904 +0000 UTC m=+0.221846796 container remove 9790a6d511e772f825f6b4e21bce13d8e41039369a332318b8eed4485117e62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 24 09:59:25 compute-0 systemd[1]: libpod-conmon-9790a6d511e772f825f6b4e21bce13d8e41039369a332318b8eed4485117e62c.scope: Deactivated successfully.
Nov 24 09:59:25 compute-0 podman[274139]: 2025-11-24 09:59:25.421827132 +0000 UTC m=+0.068057769 container create e11585826260989967450ef314eb13bc32029e2cb2f3fd727ac983eb0163d60f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_moser, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 09:59:25 compute-0 systemd[1]: Started libpod-conmon-e11585826260989967450ef314eb13bc32029e2cb2f3fd727ac983eb0163d60f.scope.
Nov 24 09:59:25 compute-0 ceph-mon[74331]: pgmap v978: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 15 KiB/s wr, 2 op/s
Nov 24 09:59:25 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:59:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 09:59:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 09:59:25 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 09:59:25 compute-0 podman[274139]: 2025-11-24 09:59:25.398458323 +0000 UTC m=+0.044688990 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:59:25 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14248aedcd51e71c9972586b2fc325b33b191f362401258ad5535b16f5cb8e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14248aedcd51e71c9972586b2fc325b33b191f362401258ad5535b16f5cb8e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14248aedcd51e71c9972586b2fc325b33b191f362401258ad5535b16f5cb8e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14248aedcd51e71c9972586b2fc325b33b191f362401258ad5535b16f5cb8e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14248aedcd51e71c9972586b2fc325b33b191f362401258ad5535b16f5cb8e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:25 compute-0 podman[274139]: 2025-11-24 09:59:25.527145079 +0000 UTC m=+0.173375746 container init e11585826260989967450ef314eb13bc32029e2cb2f3fd727ac983eb0163d60f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 09:59:25 compute-0 podman[274139]: 2025-11-24 09:59:25.54114697 +0000 UTC m=+0.187377597 container start e11585826260989967450ef314eb13bc32029e2cb2f3fd727ac983eb0163d60f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 24 09:59:25 compute-0 podman[274139]: 2025-11-24 09:59:25.545711431 +0000 UTC m=+0.191942098 container attach e11585826260989967450ef314eb13bc32029e2cb2f3fd727ac983eb0163d60f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_moser, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:59:25 compute-0 laughing_moser[274155]: --> passed data devices: 0 physical, 1 LVM
Nov 24 09:59:25 compute-0 laughing_moser[274155]: --> All data devices are unavailable
Nov 24 09:59:25 compute-0 systemd[1]: libpod-e11585826260989967450ef314eb13bc32029e2cb2f3fd727ac983eb0163d60f.scope: Deactivated successfully.
Nov 24 09:59:25 compute-0 podman[274139]: 2025-11-24 09:59:25.919463348 +0000 UTC m=+0.565693985 container died e11585826260989967450ef314eb13bc32029e2cb2f3fd727ac983eb0163d60f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 24 09:59:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b14248aedcd51e71c9972586b2fc325b33b191f362401258ad5535b16f5cb8e3-merged.mount: Deactivated successfully.
Nov 24 09:59:25 compute-0 podman[274139]: 2025-11-24 09:59:25.960815395 +0000 UTC m=+0.607046012 container remove e11585826260989967450ef314eb13bc32029e2cb2f3fd727ac983eb0163d60f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_moser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 09:59:25 compute-0 systemd[1]: libpod-conmon-e11585826260989967450ef314eb13bc32029e2cb2f3fd727ac983eb0163d60f.scope: Deactivated successfully.
Nov 24 09:59:25 compute-0 sudo[274031]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:26 compute-0 nova_compute[257700]: 2025-11-24 09:59:26.017 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:26 compute-0 sudo[274184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:59:26 compute-0 sudo[274184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:26 compute-0 sudo[274184]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:26 compute-0 sudo[274209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 09:59:26 compute-0 sudo[274209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.004000094s ======
Nov 24 09:59:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:26.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000094s
Nov 24 09:59:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 15 KiB/s wr, 2 op/s
Nov 24 09:59:26 compute-0 podman[274277]: 2025-11-24 09:59:26.623534943 +0000 UTC m=+0.056506628 container create 2facd2b17957415666c93209bbd29f2197c017d89cc0fbc0581cb10cfd5fba1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_mendel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:59:26 compute-0 systemd[1]: Started libpod-conmon-2facd2b17957415666c93209bbd29f2197c017d89cc0fbc0581cb10cfd5fba1d.scope.
Nov 24 09:59:26 compute-0 podman[274277]: 2025-11-24 09:59:26.599968789 +0000 UTC m=+0.032940474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:59:26 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:59:26 compute-0 podman[274277]: 2025-11-24 09:59:26.715813182 +0000 UTC m=+0.148784857 container init 2facd2b17957415666c93209bbd29f2197c017d89cc0fbc0581cb10cfd5fba1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_mendel, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:59:26 compute-0 podman[274277]: 2025-11-24 09:59:26.722544725 +0000 UTC m=+0.155516360 container start 2facd2b17957415666c93209bbd29f2197c017d89cc0fbc0581cb10cfd5fba1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_mendel, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Nov 24 09:59:26 compute-0 podman[274277]: 2025-11-24 09:59:26.725818235 +0000 UTC m=+0.158789870 container attach 2facd2b17957415666c93209bbd29f2197c017d89cc0fbc0581cb10cfd5fba1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 09:59:26 compute-0 pedantic_mendel[274294]: 167 167
Nov 24 09:59:26 compute-0 systemd[1]: libpod-2facd2b17957415666c93209bbd29f2197c017d89cc0fbc0581cb10cfd5fba1d.scope: Deactivated successfully.
Nov 24 09:59:26 compute-0 podman[274277]: 2025-11-24 09:59:26.731392751 +0000 UTC m=+0.164364396 container died 2facd2b17957415666c93209bbd29f2197c017d89cc0fbc0581cb10cfd5fba1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0904414665f8ffe83d90cabe05b94c553543b74c1bd8b380dbbf9289b685ea38-merged.mount: Deactivated successfully.
Nov 24 09:59:26 compute-0 podman[274277]: 2025-11-24 09:59:26.77526093 +0000 UTC m=+0.208232615 container remove 2facd2b17957415666c93209bbd29f2197c017d89cc0fbc0581cb10cfd5fba1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 09:59:26 compute-0 systemd[1]: libpod-conmon-2facd2b17957415666c93209bbd29f2197c017d89cc0fbc0581cb10cfd5fba1d.scope: Deactivated successfully.
Nov 24 09:59:26 compute-0 podman[274315]: 2025-11-24 09:59:26.975281853 +0000 UTC m=+0.054080779 container create 91efd84015a0f1f9b03af45205d5633b1527d2c61acce3307b7512cc25292048 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_tharp, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 09:59:27 compute-0 systemd[1]: Started libpod-conmon-91efd84015a0f1f9b03af45205d5633b1527d2c61acce3307b7512cc25292048.scope.
Nov 24 09:59:27 compute-0 podman[274315]: 2025-11-24 09:59:26.952007456 +0000 UTC m=+0.030806412 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:59:27 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81887ba7c1e7f277449b06fc85ec7456ed59f5e2e464d4e728cd373c78fd412d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81887ba7c1e7f277449b06fc85ec7456ed59f5e2e464d4e728cd373c78fd412d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81887ba7c1e7f277449b06fc85ec7456ed59f5e2e464d4e728cd373c78fd412d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81887ba7c1e7f277449b06fc85ec7456ed59f5e2e464d4e728cd373c78fd412d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:27 compute-0 podman[274315]: 2025-11-24 09:59:27.077230508 +0000 UTC m=+0.156029484 container init 91efd84015a0f1f9b03af45205d5633b1527d2c61acce3307b7512cc25292048 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 09:59:27 compute-0 podman[274315]: 2025-11-24 09:59:27.086628326 +0000 UTC m=+0.165427252 container start 91efd84015a0f1f9b03af45205d5633b1527d2c61acce3307b7512cc25292048 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_tharp, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:59:27 compute-0 podman[274315]: 2025-11-24 09:59:27.090147192 +0000 UTC m=+0.168946158 container attach 91efd84015a0f1f9b03af45205d5633b1527d2c61acce3307b7512cc25292048 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_tharp, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 24 09:59:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:27.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:27 compute-0 sudo[274338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:59:27 compute-0 sudo[274338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:27 compute-0 sudo[274338]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:27 compute-0 nifty_tharp[274332]: {
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:     "0": [
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:         {
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             "devices": [
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "/dev/loop3"
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             ],
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             "lv_name": "ceph_lv0",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             "lv_size": "21470642176",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             "name": "ceph_lv0",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             "tags": {
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.cluster_name": "ceph",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.crush_device_class": "",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.encrypted": "0",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.osd_id": "0",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.type": "block",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.vdo": "0",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:                 "ceph.with_tpm": "0"
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             },
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             "type": "block",
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:             "vg_name": "ceph_vg0"
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:         }
Nov 24 09:59:27 compute-0 nifty_tharp[274332]:     ]
Nov 24 09:59:27 compute-0 nifty_tharp[274332]: }
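
The JSON the container just printed maps OSD id to a list of LV records, with the interesting state in the lv_tags. A minimal sketch that runs the same query directly and reports each OSD's logical volume, backing device, and encryption flag (assumes ceph-volume is available on the host PATH rather than via cephadm's container wrapper):

import json
import subprocess

# Same query cephadm dispatched above: ceph-volume ... lvm list --format json
out = subprocess.check_output(
    ["ceph-volume", "lvm", "list", "--format", "json"])

for osd_id, lvs in json.loads(out).items():
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
              f"(encrypted={tags['ceph.encrypted']})")
# With the output above: osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (encrypted=0)
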
Nov 24 09:59:27 compute-0 systemd[1]: libpod-91efd84015a0f1f9b03af45205d5633b1527d2c61acce3307b7512cc25292048.scope: Deactivated successfully.
Nov 24 09:59:27 compute-0 conmon[274332]: conmon 91efd84015a0f1f9b03a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91efd84015a0f1f9b03af45205d5633b1527d2c61acce3307b7512cc25292048.scope/container/memory.events
Nov 24 09:59:27 compute-0 podman[274315]: 2025-11-24 09:59:27.44469189 +0000 UTC m=+0.523490846 container died 91efd84015a0f1f9b03af45205d5633b1527d2c61acce3307b7512cc25292048 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_tharp, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 09:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-81887ba7c1e7f277449b06fc85ec7456ed59f5e2e464d4e728cd373c78fd412d-merged.mount: Deactivated successfully.
Nov 24 09:59:27 compute-0 ceph-mon[74331]: pgmap v979: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 15 KiB/s wr, 2 op/s
Nov 24 09:59:27 compute-0 podman[274315]: 2025-11-24 09:59:27.506634869 +0000 UTC m=+0.585433785 container remove 91efd84015a0f1f9b03af45205d5633b1527d2c61acce3307b7512cc25292048 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:59:27 compute-0 systemd[1]: libpod-conmon-91efd84015a0f1f9b03af45205d5633b1527d2c61acce3307b7512cc25292048.scope: Deactivated successfully.
Nov 24 09:59:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:27.534Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:59:27 compute-0 sudo[274209]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:27 compute-0 sudo[274378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 09:59:27 compute-0 sudo[274378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:27 compute-0 sudo[274378]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:27 compute-0 sudo[274403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 09:59:27 compute-0 sudo[274403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:28 compute-0 podman[274469]: 2025-11-24 09:59:28.109153981 +0000 UTC m=+0.051512816 container create 7e47de1453bae7f4087090de14af524d3be1c9abea3d8cc3929bc9234c02ca12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_dubinsky, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 09:59:28 compute-0 systemd[1]: Started libpod-conmon-7e47de1453bae7f4087090de14af524d3be1c9abea3d8cc3929bc9234c02ca12.scope.
Nov 24 09:59:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:59:28 compute-0 podman[274469]: 2025-11-24 09:59:28.086024957 +0000 UTC m=+0.028383802 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:59:28 compute-0 podman[274469]: 2025-11-24 09:59:28.184079736 +0000 UTC m=+0.126438581 container init 7e47de1453bae7f4087090de14af524d3be1c9abea3d8cc3929bc9234c02ca12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 09:59:28 compute-0 podman[274469]: 2025-11-24 09:59:28.189380655 +0000 UTC m=+0.131739470 container start 7e47de1453bae7f4087090de14af524d3be1c9abea3d8cc3929bc9234c02ca12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_dubinsky, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 09:59:28 compute-0 podman[274469]: 2025-11-24 09:59:28.192547062 +0000 UTC m=+0.134905897 container attach 7e47de1453bae7f4087090de14af524d3be1c9abea3d8cc3929bc9234c02ca12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 09:59:28 compute-0 gallant_dubinsky[274486]: 167 167
Nov 24 09:59:28 compute-0 systemd[1]: libpod-7e47de1453bae7f4087090de14af524d3be1c9abea3d8cc3929bc9234c02ca12.scope: Deactivated successfully.
Nov 24 09:59:28 compute-0 podman[274469]: 2025-11-24 09:59:28.195182827 +0000 UTC m=+0.137541642 container died 7e47de1453bae7f4087090de14af524d3be1c9abea3d8cc3929bc9234c02ca12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:59:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-846409fd6fd6b0c27fa186c51b401fadc6054513fa50cdfdb137051ab8e6147e-merged.mount: Deactivated successfully.
Nov 24 09:59:28 compute-0 podman[274469]: 2025-11-24 09:59:28.225388223 +0000 UTC m=+0.167747038 container remove 7e47de1453bae7f4087090de14af524d3be1c9abea3d8cc3929bc9234c02ca12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 24 09:59:28 compute-0 systemd[1]: libpod-conmon-7e47de1453bae7f4087090de14af524d3be1c9abea3d8cc3929bc9234c02ca12.scope: Deactivated successfully.
Nov 24 09:59:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:28.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:28 compute-0 podman[274511]: 2025-11-24 09:59:28.430283545 +0000 UTC m=+0.047528889 container create e20a7facbe7cecf615097fc54312d80cd16176c7d479a62274cfbf3fba22c192 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_fermi, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 09:59:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 13 KiB/s wr, 2 op/s
Nov 24 09:59:28 compute-0 systemd[1]: Started libpod-conmon-e20a7facbe7cecf615097fc54312d80cd16176c7d479a62274cfbf3fba22c192.scope.
Nov 24 09:59:28 compute-0 podman[274511]: 2025-11-24 09:59:28.40871829 +0000 UTC m=+0.025963644 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 09:59:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 09:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd43fa065abd1ee110f14c1fbbfae0988800bfad714116854bb0294b4040a253/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd43fa065abd1ee110f14c1fbbfae0988800bfad714116854bb0294b4040a253/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd43fa065abd1ee110f14c1fbbfae0988800bfad714116854bb0294b4040a253/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd43fa065abd1ee110f14c1fbbfae0988800bfad714116854bb0294b4040a253/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 09:59:28 compute-0 podman[274511]: 2025-11-24 09:59:28.522046781 +0000 UTC m=+0.139292125 container init e20a7facbe7cecf615097fc54312d80cd16176c7d479a62274cfbf3fba22c192 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_fermi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 09:59:28 compute-0 podman[274511]: 2025-11-24 09:59:28.529125134 +0000 UTC m=+0.146370478 container start e20a7facbe7cecf615097fc54312d80cd16176c7d479a62274cfbf3fba22c192 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_fermi, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 09:59:28 compute-0 podman[274511]: 2025-11-24 09:59:28.532521006 +0000 UTC m=+0.149766380 container attach e20a7facbe7cecf615097fc54312d80cd16176c7d479a62274cfbf3fba22c192 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_fermi, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 09:59:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:28.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:59:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:29.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:29 compute-0 lvm[274603]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 09:59:29 compute-0 lvm[274603]: VG ceph_vg0 finished
Nov 24 09:59:29 compute-0 boring_fermi[274528]: {}
Nov 24 09:59:29 compute-0 systemd[1]: libpod-e20a7facbe7cecf615097fc54312d80cd16176c7d479a62274cfbf3fba22c192.scope: Deactivated successfully.
Nov 24 09:59:29 compute-0 systemd[1]: libpod-e20a7facbe7cecf615097fc54312d80cd16176c7d479a62274cfbf3fba22c192.scope: Consumed 1.100s CPU time.
Nov 24 09:59:29 compute-0 podman[274511]: 2025-11-24 09:59:29.232534582 +0000 UTC m=+0.849779906 container died e20a7facbe7cecf615097fc54312d80cd16176c7d479a62274cfbf3fba22c192 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_fermi, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 09:59:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd43fa065abd1ee110f14c1fbbfae0988800bfad714116854bb0294b4040a253-merged.mount: Deactivated successfully.
Nov 24 09:59:29 compute-0 podman[274511]: 2025-11-24 09:59:29.279090416 +0000 UTC m=+0.896335750 container remove e20a7facbe7cecf615097fc54312d80cd16176c7d479a62274cfbf3fba22c192 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_fermi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 09:59:29 compute-0 systemd[1]: libpod-conmon-e20a7facbe7cecf615097fc54312d80cd16176c7d479a62274cfbf3fba22c192.scope: Deactivated successfully.
Nov 24 09:59:29 compute-0 sudo[274403]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 09:59:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:59:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 09:59:29 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:59:29 compute-0 sudo[274617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 09:59:29 compute-0 sudo[274617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:29 compute-0 sudo[274617]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:29 compute-0 ceph-mon[74331]: pgmap v980: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 13 KiB/s wr, 2 op/s
Nov 24 09:59:29 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:59:29 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 09:59:29 compute-0 nova_compute[257700]: 2025-11-24 09:59:29.549 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:59:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:59:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:59:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:59:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:30.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 13 KiB/s wr, 2 op/s
Nov 24 09:59:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:59:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:59:30] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 24 09:59:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:59:30] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 24 09:59:31 compute-0 nova_compute[257700]: 2025-11-24 09:59:31.018 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:31.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:31 compute-0 sshd-session[274562]: Received disconnect from 14.215.126.91 port 41624:11: Bye Bye [preauth]
Nov 24 09:59:31 compute-0 sshd-session[274562]: Disconnected from authenticating user root 14.215.126.91 port 41624 [preauth]
Nov 24 09:59:31 compute-0 ceph-mon[74331]: pgmap v981: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 13 KiB/s wr, 2 op/s
Nov 24 09:59:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 09:59:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:32.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 09:59:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 15 KiB/s wr, 3 op/s
Nov 24 09:59:32 compute-0 ceph-mon[74331]: pgmap v982: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 15 KiB/s wr, 3 op/s
Nov 24 09:59:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:33.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:34.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 24 09:59:34 compute-0 nova_compute[257700]: 2025-11-24 09:59:34.551 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:59:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:59:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:59:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:59:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:35.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:35 compute-0 ceph-mon[74331]: pgmap v983: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Nov 24 09:59:36 compute-0 nova_compute[257700]: 2025-11-24 09:59:36.021 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:36.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v984: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 5.3 KiB/s wr, 1 op/s
Nov 24 09:59:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:37.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:37 compute-0 ceph-mon[74331]: pgmap v984: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 5.3 KiB/s wr, 1 op/s
Nov 24 09:59:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:37.536Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 09:59:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:37.536Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:59:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:38.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 9.7 KiB/s wr, 3 op/s
Nov 24 09:59:38 compute-0 ceph-mon[74331]: pgmap v985: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 9.7 KiB/s wr, 3 op/s
Nov 24 09:59:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:38.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:59:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:39.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:39 compute-0 nova_compute[257700]: 2025-11-24 09:59:39.554 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:39 compute-0 podman[274654]: 2025-11-24 09:59:39.799903083 +0000 UTC m=+0.071238356 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 09:59:39 compute-0 podman[274655]: 2025-11-24 09:59:39.828308956 +0000 UTC m=+0.099644279 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 09:59:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:59:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:59:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:59:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:59:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:40.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 6.3 KiB/s wr, 2 op/s
Nov 24 09:59:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:59:40] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 24 09:59:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:59:40] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 24 09:59:41 compute-0 nova_compute[257700]: 2025-11-24 09:59:41.023 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:41.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:41 compute-0 ceph-mon[74331]: pgmap v986: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 6.3 KiB/s wr, 2 op/s
Nov 24 09:59:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:42.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 9.7 KiB/s wr, 3 op/s
Nov 24 09:59:42 compute-0 ceph-mon[74331]: pgmap v987: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 9.7 KiB/s wr, 3 op/s
Nov 24 09:59:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:43.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:44.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 7.7 KiB/s wr, 2 op/s
Nov 24 09:59:44 compute-0 nova_compute[257700]: 2025-11-24 09:59:44.555 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:44 compute-0 sshd-session[274702]: Received disconnect from 36.255.3.203 port 39939:11: Bye Bye [preauth]
Nov 24 09:59:44 compute-0 sshd-session[274702]: Disconnected from authenticating user daemon 36.255.3.203 port 39939 [preauth]
Nov 24 09:59:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:59:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:59:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:59:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:59:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:45.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_09:59:45
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', '.nfs', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.rgw.root', '.mgr', 'volumes']
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 09:59:45 compute-0 ceph-mon[74331]: pgmap v988: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 7.7 KiB/s wr, 2 op/s
Nov 24 09:59:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015229338615940613 of space, bias 1.0, pg target 0.4568801584782184 quantized to 32 (current 32)
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 09:59:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 09:59:46 compute-0 nova_compute[257700]: 2025-11-24 09:59:46.028 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:46.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 9.0 KiB/s wr, 3 op/s
Nov 24 09:59:46 compute-0 ovn_controller[155123]: 2025-11-24T09:59:46Z|00055|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 24 09:59:46 compute-0 ceph-mon[74331]: pgmap v989: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 9.0 KiB/s wr, 3 op/s
Nov 24 09:59:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:47.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:47 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Check health
Nov 24 09:59:47 compute-0 sudo[274708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 09:59:47 compute-0 sudo[274708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 09:59:47 compute-0 sudo[274708]: pam_unix(sudo:session): session closed for user root
Nov 24 09:59:47 compute-0 podman[274732]: 2025-11-24 09:59:47.495953741 +0000 UTC m=+0.053262889 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 09:59:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:47.537Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:59:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:48.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 9.0 KiB/s wr, 3 op/s
Nov 24 09:59:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:48.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:59:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:49.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:49 compute-0 ceph-mon[74331]: pgmap v990: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 9.0 KiB/s wr, 3 op/s
Nov 24 09:59:49 compute-0 nova_compute[257700]: 2025-11-24 09:59:49.558 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:59:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:59:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:59:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:59:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:50.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.7 KiB/s wr, 1 op/s
Nov 24 09:59:50 compute-0 ceph-mon[74331]: pgmap v991: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.7 KiB/s wr, 1 op/s
Nov 24 09:59:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:09:59:50] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:59:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:09:59:50] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Nov 24 09:59:51 compute-0 nova_compute[257700]: 2025-11-24 09:59:51.030 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:51.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:52.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 6.0 KiB/s wr, 2 op/s
Nov 24 09:59:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:53.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:53 compute-0 ceph-mon[74331]: pgmap v992: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 6.0 KiB/s wr, 2 op/s
Nov 24 09:59:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:54.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 2.7 KiB/s wr, 1 op/s
Nov 24 09:59:54 compute-0 nova_compute[257700]: 2025-11-24 09:59:54.593 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:54 compute-0 ceph-mon[74331]: pgmap v993: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 2.7 KiB/s wr, 1 op/s
Nov 24 09:59:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 09:59:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 09:59:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 09:59:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 09:59:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 09:59:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:55.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.705 257704 DEBUG oslo_concurrency.lockutils [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "72448c73-f653-4d79-8800-4ac3e9261a45" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.705 257704 DEBUG oslo_concurrency.lockutils [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.706 257704 DEBUG oslo_concurrency.lockutils [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.706 257704 DEBUG oslo_concurrency.lockutils [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.706 257704 DEBUG oslo_concurrency.lockutils [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.707 257704 INFO nova.compute.manager [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Terminating instance
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.708 257704 DEBUG nova.compute.manager [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 09:59:55 compute-0 kernel: tapaaa65cd2-1e (unregistering): left promiscuous mode
Nov 24 09:59:55 compute-0 NetworkManager[48883]: <info>  [1763978395.7623] device (tapaaa65cd2-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 09:59:55 compute-0 ovn_controller[155123]: 2025-11-24T09:59:55Z|00056|binding|INFO|Releasing lport aaa65cd2-1ea3-464c-88bb-de1faf8ae995 from this chassis (sb_readonly=0)
Nov 24 09:59:55 compute-0 ovn_controller[155123]: 2025-11-24T09:59:55Z|00057|binding|INFO|Setting lport aaa65cd2-1ea3-464c-88bb-de1faf8ae995 down in Southbound
Nov 24 09:59:55 compute-0 ovn_controller[155123]: 2025-11-24T09:59:55Z|00058|binding|INFO|Removing iface tapaaa65cd2-1e ovn-installed in OVS
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.817 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:55 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:55.825 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:a7:a4 10.100.0.22'], port_security=['fa:16:3e:60:a7:a4 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': '72448c73-f653-4d79-8800-4ac3e9261a45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd0b398c0-c649-449a-9c4b-f8d4c7e08ebf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58766ea9-d6bf-4e11-9e8a-1652f6f7c4d5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=aaa65cd2-1ea3-464c-88bb-de1faf8ae995) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:59:55 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:55.826 165073 INFO neutron.agent.ovn.metadata.agent [-] Port aaa65cd2-1ea3-464c-88bb-de1faf8ae995 in datapath cbb18554-4df6-4004-8b94-6d2a9b50722d unbound from our chassis
Nov 24 09:59:55 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:55.827 165073 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cbb18554-4df6-4004-8b94-6d2a9b50722d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 09:59:55 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:55.828 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1c7f10-7d1c-4422-89c9-08ed745d8048]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:59:55 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:55.828 165073 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d namespace which is not needed anymore
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.835 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:55 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 24 09:59:55 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 15.167s CPU time.
Nov 24 09:59:55 compute-0 systemd-machined[219130]: Machine qemu-3-instance-00000007 terminated.
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.946 257704 INFO nova.virt.libvirt.driver [-] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Instance destroyed successfully.
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.947 257704 DEBUG nova.objects.instance [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 72448c73-f653-4d79-8800-4ac3e9261a45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.959 257704 DEBUG nova.virt.libvirt.vif [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T09:58:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-171812519',display_name='tempest-TestNetworkBasicOps-server-171812519',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-171812519',id=7,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCgmeMHJykGqateSnctRhNcdOGXzWmSb9mhWenyV80u/CYEJehcRd0ODCCJk4df+FrF3/+0nuDoJ1DckUOiB1lj/KcXR6y85li1qYj4UyiOPaprHciIxj1dy7cPFGRzkiA==',key_name='tempest-TestNetworkBasicOps-625568321',keypairs=<?>,launch_index=0,launched_at=2025-11-24T09:58:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-iwncyci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T09:58:51Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=72448c73-f653-4d79-8800-4ac3e9261a45,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "address": "fa:16:3e:60:a7:a4", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa65cd2-1e", "ovs_interfaceid": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.960 257704 DEBUG nova.network.os_vif_util [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "address": "fa:16:3e:60:a7:a4", "network": {"id": "cbb18554-4df6-4004-8b94-6d2a9b50722d", "bridge": "br-int", "label": "tempest-network-smoke--1864982359", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaaa65cd2-1e", "ovs_interfaceid": "aaa65cd2-1ea3-464c-88bb-de1faf8ae995", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.961 257704 DEBUG nova.network.os_vif_util [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=aaa65cd2-1ea3-464c-88bb-de1faf8ae995,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa65cd2-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.962 257704 DEBUG os_vif [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=aaa65cd2-1ea3-464c-88bb-de1faf8ae995,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa65cd2-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.963 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.963 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaaa65cd2-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.965 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.966 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:55 compute-0 nova_compute[257700]: 2025-11-24 09:59:55.968 257704 INFO os_vif [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:a7:a4,bridge_name='br-int',has_traffic_filtering=True,id=aaa65cd2-1ea3-464c-88bb-de1faf8ae995,network=Network(cbb18554-4df6-4004-8b94-6d2a9b50722d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaaa65cd2-1e')
Nov 24 09:59:55 compute-0 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[273735]: [NOTICE]   (273739) : haproxy version is 2.8.14-c23fe91
Nov 24 09:59:55 compute-0 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[273735]: [NOTICE]   (273739) : path to executable is /usr/sbin/haproxy
Nov 24 09:59:55 compute-0 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[273735]: [WARNING]  (273739) : Exiting Master process...
Nov 24 09:59:55 compute-0 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[273735]: [ALERT]    (273739) : Current worker (273741) exited with code 143 (Terminated)
Nov 24 09:59:55 compute-0 neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d[273735]: [WARNING]  (273739) : All workers exited. Exiting... (0)
Nov 24 09:59:55 compute-0 systemd[1]: libpod-40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304.scope: Deactivated successfully.
Nov 24 09:59:55 compute-0 conmon[273735]: conmon 40d498fbdcd7a31af38e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304.scope/container/memory.events
Nov 24 09:59:55 compute-0 podman[274787]: 2025-11-24 09:59:55.991495969 +0000 UTC m=+0.054576571 container died 40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 09:59:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304-userdata-shm.mount: Deactivated successfully.
Nov 24 09:59:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-807471785bf64d89d388596599806772ebc0ffb18d232d146376ea154b497c05-merged.mount: Deactivated successfully.
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.033 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:56 compute-0 podman[274787]: 2025-11-24 09:59:56.037373746 +0000 UTC m=+0.100454318 container cleanup 40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 09:59:56 compute-0 systemd[1]: libpod-conmon-40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304.scope: Deactivated successfully.
Nov 24 09:59:56 compute-0 podman[274848]: 2025-11-24 09:59:56.118665216 +0000 UTC m=+0.048980883 container remove 40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 09:59:56 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:56.126 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c78290-7ee8-4495-9fa6-674f2dbb9101]: (4, ('Mon Nov 24 09:59:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d (40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304)\n40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304\nMon Nov 24 09:59:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d (40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304)\n40d498fbdcd7a31af38e9bb51d76485f007db97b01470fad413a9eeef2909304\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:59:56 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:56.128 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[41082d56-74ff-4933-bddc-bac75f49bf99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:59:56 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:56.129 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbb18554-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.131 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:56 compute-0 kernel: tapcbb18554-40: left promiscuous mode
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.147 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:56 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:56.150 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb8099b-0e27-4902-a3b0-7d49e5c4e4a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:59:56 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:56.171 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[df8adafa-62d2-49be-98b2-f92ce2ca60fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:59:56 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:56.172 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb821db-65c6-4d80-9312-697155987c70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:59:56 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:56.189 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[070fd90a-140d-4644-bb97-44e3ee0bab0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427861, 'reachable_time': 40670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274863, 'error': None, 'target': 'ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:59:56 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:56.191 165227 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cbb18554-4df6-4004-8b94-6d2a9b50722d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 09:59:56 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:56.192 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[96b896d9-1a50-4a61-9a18-40879ece20bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 09:59:56 compute-0 systemd[1]: run-netns-ovnmeta\x2dcbb18554\x2d4df6\x2d4004\x2d8b94\x2d6d2a9b50722d.mount: Deactivated successfully.
Nov 24 09:59:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:56.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 3.4 KiB/s wr, 9 op/s
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.623 257704 DEBUG nova.compute.manager [req-ec1552b9-4e1f-4967-ab4d-3c380cce6cc4 req-bfa762a7-260d-4dc0-9d38-0dc741e51ce2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Received event network-vif-unplugged-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.623 257704 DEBUG oslo_concurrency.lockutils [req-ec1552b9-4e1f-4967-ab4d-3c380cce6cc4 req-bfa762a7-260d-4dc0-9d38-0dc741e51ce2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.623 257704 DEBUG oslo_concurrency.lockutils [req-ec1552b9-4e1f-4967-ab4d-3c380cce6cc4 req-bfa762a7-260d-4dc0-9d38-0dc741e51ce2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.624 257704 DEBUG oslo_concurrency.lockutils [req-ec1552b9-4e1f-4967-ab4d-3c380cce6cc4 req-bfa762a7-260d-4dc0-9d38-0dc741e51ce2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.624 257704 DEBUG nova.compute.manager [req-ec1552b9-4e1f-4967-ab4d-3c380cce6cc4 req-bfa762a7-260d-4dc0-9d38-0dc741e51ce2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] No waiting events found dispatching network-vif-unplugged-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.624 257704 DEBUG nova.compute.manager [req-ec1552b9-4e1f-4967-ab4d-3c380cce6cc4 req-bfa762a7-260d-4dc0-9d38-0dc741e51ce2 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Received event network-vif-unplugged-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.737 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 09:59:56 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:56.737 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 09:59:56 compute-0 ovn_metadata_agent[165067]: 2025-11-24 09:59:56.738 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.904 257704 INFO nova.virt.libvirt.driver [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Deleting instance files /var/lib/nova/instances/72448c73-f653-4d79-8800-4ac3e9261a45_del
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.904 257704 INFO nova.virt.libvirt.driver [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Deletion of /var/lib/nova/instances/72448c73-f653-4d79-8800-4ac3e9261a45_del complete
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.956 257704 INFO nova.compute.manager [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Took 1.25 seconds to destroy the instance on the hypervisor.
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.957 257704 DEBUG oslo.service.loopingcall [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.957 257704 DEBUG nova.compute.manager [-] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 09:59:56 compute-0 nova_compute[257700]: 2025-11-24 09:59:56.958 257704 DEBUG nova.network.neutron [-] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 09:59:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 09:59:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:57.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 09:59:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:57.538Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:59:57 compute-0 ceph-mon[74331]: pgmap v994: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 3.4 KiB/s wr, 9 op/s
Nov 24 09:59:57 compute-0 nova_compute[257700]: 2025-11-24 09:59:57.755 257704 DEBUG nova.network.neutron [-] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 09:59:57 compute-0 nova_compute[257700]: 2025-11-24 09:59:57.767 257704 INFO nova.compute.manager [-] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Took 0.81 seconds to deallocate network for instance.
Nov 24 09:59:57 compute-0 nova_compute[257700]: 2025-11-24 09:59:57.811 257704 DEBUG oslo_concurrency.lockutils [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:59:57 compute-0 nova_compute[257700]: 2025-11-24 09:59:57.811 257704 DEBUG oslo_concurrency.lockutils [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:59:57 compute-0 nova_compute[257700]: 2025-11-24 09:59:57.814 257704 DEBUG nova.compute.manager [req-d1358ecd-8a10-4ab9-af74-a491e6490155 req-5b41dd75-af7c-4493-9b9a-5f913db80c23 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Received event network-vif-deleted-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.028 257704 DEBUG oslo_concurrency.processutils [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 09:59:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:09:59:58.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 09:59:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 09:59:58 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1125735957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.461 257704 DEBUG oslo_concurrency.processutils [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 09:59:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 2.1 KiB/s wr, 8 op/s
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.469 257704 DEBUG nova.compute.provider_tree [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.483 257704 DEBUG nova.scheduler.client.report [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.501 257704 DEBUG oslo_concurrency.lockutils [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.536 257704 INFO nova.scheduler.client.report [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 72448c73-f653-4d79-8800-4ac3e9261a45
Nov 24 09:59:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1125735957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 09:59:58 compute-0 ceph-mon[74331]: pgmap v995: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 2.1 KiB/s wr, 8 op/s
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.592 257704 DEBUG oslo_concurrency.lockutils [None req-f484551b-bbf7-44c7-9c2c-b29706eb0f4b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.678 257704 DEBUG nova.compute.manager [req-6d6d540f-8a45-47cd-9ad6-62e1cc47371b req-eac8bbfa-341f-436a-9ff1-eb4b1ecc0cdc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Received event network-vif-plugged-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.678 257704 DEBUG oslo_concurrency.lockutils [req-6d6d540f-8a45-47cd-9ad6-62e1cc47371b req-eac8bbfa-341f-436a-9ff1-eb4b1ecc0cdc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 09:59:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.679 257704 DEBUG oslo_concurrency.lockutils [req-6d6d540f-8a45-47cd-9ad6-62e1cc47371b req-eac8bbfa-341f-436a-9ff1-eb4b1ecc0cdc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.679 257704 DEBUG oslo_concurrency.lockutils [req-6d6d540f-8a45-47cd-9ad6-62e1cc47371b req-eac8bbfa-341f-436a-9ff1-eb4b1ecc0cdc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "72448c73-f653-4d79-8800-4ac3e9261a45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.680 257704 DEBUG nova.compute.manager [req-6d6d540f-8a45-47cd-9ad6-62e1cc47371b req-eac8bbfa-341f-436a-9ff1-eb4b1ecc0cdc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] No waiting events found dispatching network-vif-plugged-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 09:59:58 compute-0 nova_compute[257700]: 2025-11-24 09:59:58.680 257704 WARNING nova.compute.manager [req-6d6d540f-8a45-47cd-9ad6-62e1cc47371b req-eac8bbfa-341f-436a-9ff1-eb4b1ecc0cdc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Received unexpected event network-vif-plugged-aaa65cd2-1ea3-464c-88bb-de1faf8ae995 for instance with vm_state deleted and task_state None.
Nov 24 09:59:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T09:59:58.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 09:59:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 09:59:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 09:59:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:09:59:59.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:00 compute-0 ceph-mon[74331]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Nov 24 10:00:00 compute-0 ceph-mon[74331]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Nov 24 10:00:00 compute-0 ceph-mon[74331]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.vvoanr on compute-1 is in unknown state
Nov 24 10:00:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:00:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:00:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 09:59:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:00:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:00:00 compute-0 ceph-mon[74331]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Nov 24 10:00:00 compute-0 ceph-mon[74331]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Nov 24 10:00:00 compute-0 ceph-mon[74331]:     daemon nfs.cephfs.0.0.compute-1.vvoanr on compute-1 is in unknown state
Nov 24 10:00:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:00.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 2.1 KiB/s wr, 8 op/s
Nov 24 10:00:00 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:00.741 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:00 compute-0 nova_compute[257700]: 2025-11-24 10:00:00.967 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:00:00] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 24 10:00:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:00:00] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 24 10:00:01 compute-0 nova_compute[257700]: 2025-11-24 10:00:01.034 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:00:01 compute-0 ceph-mon[74331]: pgmap v996: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 2.1 KiB/s wr, 8 op/s
Nov 24 10:00:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:01.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1474587093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:00:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1474587093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:00:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:02.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.8 KiB/s wr, 29 op/s
Nov 24 10:00:03 compute-0 nova_compute[257700]: 2025-11-24 10:00:03.058 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:03 compute-0 ceph-mon[74331]: pgmap v997: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.8 KiB/s wr, 29 op/s
Nov 24 10:00:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:03.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:04.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Nov 24 10:00:04 compute-0 ceph-mon[74331]: pgmap v998: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Nov 24 10:00:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:00:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:00:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:00:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:00:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:05.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:05 compute-0 nova_compute[257700]: 2025-11-24 10:00:05.972 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:06 compute-0 nova_compute[257700]: 2025-11-24 10:00:06.037 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:06 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1218318765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:06.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.7 KiB/s wr, 57 op/s
Nov 24 10:00:07 compute-0 ceph-mon[74331]: pgmap v999: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.7 KiB/s wr, 57 op/s
Nov 24 10:00:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:07.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:07.539Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:00:07 compute-0 sudo[274901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:00:07 compute-0 sudo[274901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:07 compute-0 sudo[274901]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:08.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.0 KiB/s wr, 49 op/s
Nov 24 10:00:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:08 compute-0 sshd-session[274899]: Invalid user tomcat from 14.215.126.91 port 49026
Nov 24 10:00:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:08.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:00:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:08.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:00:08 compute-0 sshd-session[274899]: Received disconnect from 14.215.126.91 port 49026:11: Bye Bye [preauth]
Nov 24 10:00:08 compute-0 sshd-session[274899]: Disconnected from invalid user tomcat 14.215.126.91 port 49026 [preauth]
Nov 24 10:00:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:00:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:09.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:00:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=cleanup t=2025-11-24T10:00:09.170120751Z level=info msg="Completed cleanup jobs" duration=12.330721ms
Nov 24 10:00:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=plugins.update.checker t=2025-11-24T10:00:09.309032266Z level=info msg="Update check succeeded" duration=51.611247ms
Nov 24 10:00:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=grafana.update.checker t=2025-11-24T10:00:09.310076091Z level=info msg="Update check succeeded" duration=60.516555ms
Nov 24 10:00:09 compute-0 ceph-mon[74331]: pgmap v1000: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.0 KiB/s wr, 49 op/s
Nov 24 10:00:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:00:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:00:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:00:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:00:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:10.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.0 KiB/s wr, 49 op/s
Nov 24 10:00:10 compute-0 ceph-mon[74331]: pgmap v1001: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.0 KiB/s wr, 49 op/s
Nov 24 10:00:10 compute-0 podman[274931]: 2025-11-24 10:00:10.818970087 +0000 UTC m=+0.090640860 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:00:10 compute-0 podman[274932]: 2025-11-24 10:00:10.863410689 +0000 UTC m=+0.134985290 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 10:00:10 compute-0 nova_compute[257700]: 2025-11-24 10:00:10.945 257704 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978395.9431973, 72448c73-f653-4d79-8800-4ac3e9261a45 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:00:10 compute-0 nova_compute[257700]: 2025-11-24 10:00:10.945 257704 INFO nova.compute.manager [-] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] VM Stopped (Lifecycle Event)
Nov 24 10:00:10 compute-0 nova_compute[257700]: 2025-11-24 10:00:10.969 257704 DEBUG nova.compute.manager [None req-ba55db08-0f6c-4b6f-a15d-6875f2788edc - - - - - -] [instance: 72448c73-f653-4d79-8800-4ac3e9261a45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:00:10 compute-0 nova_compute[257700]: 2025-11-24 10:00:10.976 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:00:10] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 24 10:00:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:00:10] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 24 10:00:11 compute-0 nova_compute[257700]: 2025-11-24 10:00:11.040 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:11.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:12.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.0 KiB/s wr, 50 op/s
Nov 24 10:00:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:13.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:13 compute-0 ceph-mon[74331]: pgmap v1002: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.0 KiB/s wr, 50 op/s
Nov 24 10:00:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:14.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 24 10:00:14 compute-0 ceph-mon[74331]: pgmap v1003: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 24 10:00:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:00:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:00:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:00:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:00:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:15.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:00:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:00:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:00:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:00:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:00:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:00:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:00:15 compute-0 nova_compute[257700]: 2025-11-24 10:00:15.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:15 compute-0 nova_compute[257700]: 2025-11-24 10:00:15.979 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:16 compute-0 nova_compute[257700]: 2025-11-24 10:00:16.041 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:16.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 24 10:00:16 compute-0 ceph-mon[74331]: pgmap v1004: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.612284) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416612335, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1080, "num_deletes": 502, "total_data_size": 1243455, "memory_usage": 1276256, "flush_reason": "Manual Compaction"}
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416624054, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 916203, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28478, "largest_seqno": 29557, "table_properties": {"data_size": 911957, "index_size": 1386, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13933, "raw_average_key_size": 19, "raw_value_size": 901005, "raw_average_value_size": 1261, "num_data_blocks": 60, "num_entries": 714, "num_filter_entries": 714, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978354, "oldest_key_time": 1763978354, "file_creation_time": 1763978416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 11878 microseconds, and 7072 cpu microseconds.
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.624160) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 916203 bytes OK
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.624191) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.626385) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.626431) EVENT_LOG_v1 {"time_micros": 1763978416626420, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.626458) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1237409, prev total WAL file size 1237409, number of live WAL files 2.
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.627170) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(894KB)], [62(16MB)]
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416627205, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 18037542, "oldest_snapshot_seqno": -1}
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5793 keys, 12163204 bytes, temperature: kUnknown
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416701037, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12163204, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12126585, "index_size": 21021, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 149904, "raw_average_key_size": 25, "raw_value_size": 12024081, "raw_average_value_size": 2075, "num_data_blocks": 839, "num_entries": 5793, "num_filter_entries": 5793, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763978416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.701486) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12163204 bytes
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.703662) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.7 rd, 164.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 16.3 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(33.0) write-amplify(13.3) OK, records in: 6793, records dropped: 1000 output_compression: NoCompression
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.703700) EVENT_LOG_v1 {"time_micros": 1763978416703685, "job": 34, "event": "compaction_finished", "compaction_time_micros": 74003, "compaction_time_cpu_micros": 25762, "output_level": 6, "num_output_files": 1, "total_output_size": 12163204, "num_input_records": 6793, "num_output_records": 5793, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416704091, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978416708838, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.627113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.708887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.708894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.708896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.708899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:00:16 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:00:16.708901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:00:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:17.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:17.540Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:00:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:17.541Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:00:17 compute-0 podman[274987]: 2025-11-24 10:00:17.828503036 +0000 UTC m=+0.082940412 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:00:17 compute-0 nova_compute[257700]: 2025-11-24 10:00:17.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:18.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:18.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:00:18 compute-0 nova_compute[257700]: 2025-11-24 10:00:18.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:19.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:19 compute-0 ceph-mon[74331]: pgmap v1005: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:19 compute-0 nova_compute[257700]: 2025-11-24 10:00:19.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:19 compute-0 nova_compute[257700]: 2025-11-24 10:00:19.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:00:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:00:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:00:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:00:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:00:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:20.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3007477117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:20 compute-0 ceph-mon[74331]: pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:20.571 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:20.571 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:20.572 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:20 compute-0 nova_compute[257700]: 2025-11-24 10:00:20.983 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:00:20] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:00:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:00:20] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:00:21 compute-0 nova_compute[257700]: 2025-11-24 10:00:21.043 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:21.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/397015019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:21 compute-0 nova_compute[257700]: 2025-11-24 10:00:21.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:21 compute-0 nova_compute[257700]: 2025-11-24 10:00:21.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:00:21 compute-0 nova_compute[257700]: 2025-11-24 10:00:21.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:00:21 compute-0 nova_compute[257700]: 2025-11-24 10:00:21.931 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:00:21 compute-0 nova_compute[257700]: 2025-11-24 10:00:21.932 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:21 compute-0 nova_compute[257700]: 2025-11-24 10:00:21.948 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:21 compute-0 nova_compute[257700]: 2025-11-24 10:00:21.948 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:21 compute-0 nova_compute[257700]: 2025-11-24 10:00:21.948 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:21 compute-0 nova_compute[257700]: 2025-11-24 10:00:21.949 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:00:21 compute-0 nova_compute[257700]: 2025-11-24 10:00:21.949 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:00:22 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492539947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:22 compute-0 nova_compute[257700]: 2025-11-24 10:00:22.371 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:22.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:00:22 compute-0 nova_compute[257700]: 2025-11-24 10:00:22.515 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:00:22 compute-0 nova_compute[257700]: 2025-11-24 10:00:22.516 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4607MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:00:22 compute-0 nova_compute[257700]: 2025-11-24 10:00:22.517 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:22 compute-0 nova_compute[257700]: 2025-11-24 10:00:22.517 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:22 compute-0 nova_compute[257700]: 2025-11-24 10:00:22.577 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:00:22 compute-0 nova_compute[257700]: 2025-11-24 10:00:22.577 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:00:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/492539947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:22 compute-0 ceph-mon[74331]: pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:00:22 compute-0 nova_compute[257700]: 2025-11-24 10:00:22.595 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:00:23 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2622083022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:23 compute-0 nova_compute[257700]: 2025-11-24 10:00:23.063 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:23 compute-0 nova_compute[257700]: 2025-11-24 10:00:23.068 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:00:23 compute-0 nova_compute[257700]: 2025-11-24 10:00:23.081 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:00:23 compute-0 nova_compute[257700]: 2025-11-24 10:00:23.101 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:00:23 compute-0 nova_compute[257700]: 2025-11-24 10:00:23.102 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:23.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:23 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2622083022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:23 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2206353367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:24 compute-0 nova_compute[257700]: 2025-11-24 10:00:24.098 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:24 compute-0 nova_compute[257700]: 2025-11-24 10:00:24.098 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 10:00:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:24.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 10:00:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:24 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1856495997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:24 compute-0 ceph-mon[74331]: pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:00:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:00:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:00:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:00:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:25.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:25 compute-0 nova_compute[257700]: 2025-11-24 10:00:25.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:00:25 compute-0 nova_compute[257700]: 2025-11-24 10:00:25.988 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:26 compute-0 nova_compute[257700]: 2025-11-24 10:00:26.044 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:26.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:00:26 compute-0 ceph-mon[74331]: pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:00:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:27.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:27.542Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:00:27 compute-0 sudo[275063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:00:27 compute-0 sudo[275063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:27 compute-0 sudo[275063]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:28.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:28 compute-0 ceph-mon[74331]: pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:28 compute-0 nova_compute[257700]: 2025-11-24 10:00:28.806 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:28 compute-0 nova_compute[257700]: 2025-11-24 10:00:28.807 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:28 compute-0 nova_compute[257700]: 2025-11-24 10:00:28.820 257704 DEBUG nova.compute.manager [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 10:00:28 compute-0 nova_compute[257700]: 2025-11-24 10:00:28.881 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:28 compute-0 nova_compute[257700]: 2025-11-24 10:00:28.882 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:28 compute-0 nova_compute[257700]: 2025-11-24 10:00:28.887 257704 DEBUG nova.virt.hardware [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 10:00:28 compute-0 nova_compute[257700]: 2025-11-24 10:00:28.888 257704 INFO nova.compute.claims [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Claim successful on node compute-0.ctlplane.example.com
Nov 24 10:00:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:28.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:00:28 compute-0 nova_compute[257700]: 2025-11-24 10:00:28.971 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:29.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:00:29 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2796252596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.425 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.433 257704 DEBUG nova.compute.provider_tree [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.447 257704 DEBUG nova.scheduler.client.report [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.466 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.467 257704 DEBUG nova.compute.manager [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.514 257704 DEBUG nova.compute.manager [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.514 257704 DEBUG nova.network.neutron [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.535 257704 INFO nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.553 257704 DEBUG nova.compute.manager [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 10:00:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2796252596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.650 257704 DEBUG nova.compute.manager [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.652 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.652 257704 INFO nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Creating image(s)
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.680 257704 DEBUG nova.storage.rbd_utils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:00:29 compute-0 sudo[275112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:00:29 compute-0 sudo[275112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:29 compute-0 sudo[275112]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.711 257704 DEBUG nova.storage.rbd_utils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.746 257704 DEBUG nova.storage.rbd_utils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.751 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:29 compute-0 sudo[275173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:00:29 compute-0 sudo[275173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.809 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.810 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.811 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.811 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.838 257704 DEBUG nova.storage.rbd_utils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:00:29 compute-0 nova_compute[257700]: 2025-11-24 10:00:29.841 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:00:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:00:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:00:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:00:30 compute-0 nova_compute[257700]: 2025-11-24 10:00:30.088 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:30 compute-0 nova_compute[257700]: 2025-11-24 10:00:30.185 257704 DEBUG nova.storage.rbd_utils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 24 10:00:30 compute-0 sudo[275173]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:30 compute-0 nova_compute[257700]: 2025-11-24 10:00:30.334 257704 DEBUG nova.objects.instance [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:00:30 compute-0 nova_compute[257700]: 2025-11-24 10:00:30.348 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 10:00:30 compute-0 nova_compute[257700]: 2025-11-24 10:00:30.348 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Ensure instance console log exists: /var/lib/nova/instances/7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 10:00:30 compute-0 nova_compute[257700]: 2025-11-24 10:00:30.349 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:30 compute-0 nova_compute[257700]: 2025-11-24 10:00:30.349 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:30 compute-0 nova_compute[257700]: 2025-11-24 10:00:30.349 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:00:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:30.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:00:30 compute-0 sudo[275360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:00:30 compute-0 sudo[275360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:30 compute-0 sudo[275360]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:30 compute-0 nova_compute[257700]: 2025-11-24 10:00:30.573 257704 DEBUG nova.policy [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 10:00:30 compute-0 sudo[275385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Nov 24 10:00:30 compute-0 sudo[275385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:00:30 compute-0 ceph-mon[74331]: pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:30 compute-0 sudo[275385]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:00:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 10:00:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:00:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 10:00:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 10:00:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 10:00:30 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:00:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:00:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:00:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:00:31 compute-0 nova_compute[257700]: 2025-11-24 10:00:31.032 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 10:00:31 compute-0 nova_compute[257700]: 2025-11-24 10:00:31.046 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 10:00:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:00:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:00:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:00:31 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-0 sudo[275428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:00:31 compute-0 sudo[275428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:31 compute-0 sudo[275428]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:31.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:31 compute-0 sudo[275453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:00:31 compute-0 sudo[275453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:31 compute-0 podman[275523]: 2025-11-24 10:00:31.62555497 +0000 UTC m=+0.053957757 container create 731969de6cae526991bdf5546497a6267eb39786750c41aebad484b7c7297a2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_banach, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 24 10:00:31 compute-0 systemd[1]: Started libpod-conmon-731969de6cae526991bdf5546497a6267eb39786750c41aebad484b7c7297a2e.scope.
Nov 24 10:00:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:00:31 compute-0 podman[275523]: 2025-11-24 10:00:31.608605877 +0000 UTC m=+0.037008694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:00:31 compute-0 podman[275523]: 2025-11-24 10:00:31.724285846 +0000 UTC m=+0.152688653 container init 731969de6cae526991bdf5546497a6267eb39786750c41aebad484b7c7297a2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:00:31 compute-0 podman[275523]: 2025-11-24 10:00:31.738017399 +0000 UTC m=+0.166420186 container start 731969de6cae526991bdf5546497a6267eb39786750c41aebad484b7c7297a2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_banach, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:00:31 compute-0 podman[275523]: 2025-11-24 10:00:31.741895854 +0000 UTC m=+0.170298641 container attach 731969de6cae526991bdf5546497a6267eb39786750c41aebad484b7c7297a2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_banach, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 24 10:00:31 compute-0 youthful_banach[275539]: 167 167
Nov 24 10:00:31 compute-0 systemd[1]: libpod-731969de6cae526991bdf5546497a6267eb39786750c41aebad484b7c7297a2e.scope: Deactivated successfully.
Nov 24 10:00:31 compute-0 podman[275523]: 2025-11-24 10:00:31.749353276 +0000 UTC m=+0.177756113 container died 731969de6cae526991bdf5546497a6267eb39786750c41aebad484b7c7297a2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_banach, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 24 10:00:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b12fd3118e197aab9408be16438d5a21ff30cf5e0772bb55e424dee4c5beccf-merged.mount: Deactivated successfully.
Nov 24 10:00:31 compute-0 podman[275523]: 2025-11-24 10:00:31.794750862 +0000 UTC m=+0.223153649 container remove 731969de6cae526991bdf5546497a6267eb39786750c41aebad484b7c7297a2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:00:31 compute-0 systemd[1]: libpod-conmon-731969de6cae526991bdf5546497a6267eb39786750c41aebad484b7c7297a2e.scope: Deactivated successfully.
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:00:31 compute-0 ceph-mon[74331]: pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:00:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:00:32 compute-0 podman[275563]: 2025-11-24 10:00:32.008476929 +0000 UTC m=+0.052101090 container create be2668c8c63fea7dd82ceb01e8093f7eb34ddf28b1c2171442271bd6fbc1dda4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_gagarin, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 24 10:00:32 compute-0 systemd[1]: Started libpod-conmon-be2668c8c63fea7dd82ceb01e8093f7eb34ddf28b1c2171442271bd6fbc1dda4.scope.
Nov 24 10:00:32 compute-0 podman[275563]: 2025-11-24 10:00:31.985546991 +0000 UTC m=+0.029171192 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:00:32 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a69f2b24232e01abd90adb53897945929adb4fefa787f8216899744f5044d2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a69f2b24232e01abd90adb53897945929adb4fefa787f8216899744f5044d2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a69f2b24232e01abd90adb53897945929adb4fefa787f8216899744f5044d2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a69f2b24232e01abd90adb53897945929adb4fefa787f8216899744f5044d2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a69f2b24232e01abd90adb53897945929adb4fefa787f8216899744f5044d2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:32 compute-0 podman[275563]: 2025-11-24 10:00:32.109596924 +0000 UTC m=+0.153221115 container init be2668c8c63fea7dd82ceb01e8093f7eb34ddf28b1c2171442271bd6fbc1dda4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_gagarin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:00:32 compute-0 podman[275563]: 2025-11-24 10:00:32.118425748 +0000 UTC m=+0.162049909 container start be2668c8c63fea7dd82ceb01e8093f7eb34ddf28b1c2171442271bd6fbc1dda4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_gagarin, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:00:32 compute-0 podman[275563]: 2025-11-24 10:00:32.122018066 +0000 UTC m=+0.165642257 container attach be2668c8c63fea7dd82ceb01e8093f7eb34ddf28b1c2171442271bd6fbc1dda4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_gagarin, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Nov 24 10:00:32 compute-0 serene_gagarin[275580]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:00:32 compute-0 serene_gagarin[275580]: --> All data devices are unavailable
Nov 24 10:00:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:32.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:32 compute-0 systemd[1]: libpod-be2668c8c63fea7dd82ceb01e8093f7eb34ddf28b1c2171442271bd6fbc1dda4.scope: Deactivated successfully.
Nov 24 10:00:32 compute-0 podman[275563]: 2025-11-24 10:00:32.460274558 +0000 UTC m=+0.503898819 container died be2668c8c63fea7dd82ceb01e8093f7eb34ddf28b1c2171442271bd6fbc1dda4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_gagarin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:00:32 compute-0 nova_compute[257700]: 2025-11-24 10:00:32.480 257704 DEBUG nova.network.neutron [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Successfully updated port: fe53799e-0d96-417b-8153-212f65cd709e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 10:00:32 compute-0 nova_compute[257700]: 2025-11-24 10:00:32.496 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a69f2b24232e01abd90adb53897945929adb4fefa787f8216899744f5044d2e-merged.mount: Deactivated successfully.
Nov 24 10:00:32 compute-0 nova_compute[257700]: 2025-11-24 10:00:32.497 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:00:32 compute-0 nova_compute[257700]: 2025-11-24 10:00:32.497 257704 DEBUG nova.network.neutron [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 10:00:32 compute-0 podman[275563]: 2025-11-24 10:00:32.516547419 +0000 UTC m=+0.560171600 container remove be2668c8c63fea7dd82ceb01e8093f7eb34ddf28b1c2171442271bd6fbc1dda4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 24 10:00:32 compute-0 systemd[1]: libpod-conmon-be2668c8c63fea7dd82ceb01e8093f7eb34ddf28b1c2171442271bd6fbc1dda4.scope: Deactivated successfully.
Nov 24 10:00:32 compute-0 sudo[275453]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:32 compute-0 nova_compute[257700]: 2025-11-24 10:00:32.581 257704 DEBUG nova.compute.manager [req-5c28b6e1-6cce-49d1-af46-1008995bf0d1 req-0dda6d61-db08-473d-ac57-f7ef815a27ca 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Received event network-changed-fe53799e-0d96-417b-8153-212f65cd709e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:32 compute-0 nova_compute[257700]: 2025-11-24 10:00:32.582 257704 DEBUG nova.compute.manager [req-5c28b6e1-6cce-49d1-af46-1008995bf0d1 req-0dda6d61-db08-473d-ac57-f7ef815a27ca 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Refreshing instance network info cache due to event network-changed-fe53799e-0d96-417b-8153-212f65cd709e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:00:32 compute-0 nova_compute[257700]: 2025-11-24 10:00:32.582 257704 DEBUG oslo_concurrency.lockutils [req-5c28b6e1-6cce-49d1-af46-1008995bf0d1 req-0dda6d61-db08-473d-ac57-f7ef815a27ca 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:00:32 compute-0 sudo[275609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:00:32 compute-0 sudo[275609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:32 compute-0 sudo[275609]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:32 compute-0 nova_compute[257700]: 2025-11-24 10:00:32.648 257704 DEBUG nova.network.neutron [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 10:00:32 compute-0 sudo[275634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:00:32 compute-0 sudo[275634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:33 compute-0 podman[275701]: 2025-11-24 10:00:33.088373882 +0000 UTC m=+0.051225450 container create 765a22406deb557fc7442a7f2b28ba5b7d8e95b366ecf7e8891299550a8876fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shirley, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 24 10:00:33 compute-0 systemd[1]: Started libpod-conmon-765a22406deb557fc7442a7f2b28ba5b7d8e95b366ecf7e8891299550a8876fd.scope.
Nov 24 10:00:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:00:33 compute-0 podman[275701]: 2025-11-24 10:00:33.0702289 +0000 UTC m=+0.033080458 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:00:33 compute-0 podman[275701]: 2025-11-24 10:00:33.167660793 +0000 UTC m=+0.130512361 container init 765a22406deb557fc7442a7f2b28ba5b7d8e95b366ecf7e8891299550a8876fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 24 10:00:33 compute-0 podman[275701]: 2025-11-24 10:00:33.173334731 +0000 UTC m=+0.136186269 container start 765a22406deb557fc7442a7f2b28ba5b7d8e95b366ecf7e8891299550a8876fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 10:00:33 compute-0 podman[275701]: 2025-11-24 10:00:33.176333535 +0000 UTC m=+0.139185093 container attach 765a22406deb557fc7442a7f2b28ba5b7d8e95b366ecf7e8891299550a8876fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shirley, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 24 10:00:33 compute-0 jovial_shirley[275717]: 167 167
Nov 24 10:00:33 compute-0 systemd[1]: libpod-765a22406deb557fc7442a7f2b28ba5b7d8e95b366ecf7e8891299550a8876fd.scope: Deactivated successfully.
Nov 24 10:00:33 compute-0 podman[275701]: 2025-11-24 10:00:33.178849146 +0000 UTC m=+0.141700694 container died 765a22406deb557fc7442a7f2b28ba5b7d8e95b366ecf7e8891299550a8876fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shirley, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 10:00:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:33.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
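
The three radosgw lines above are one request: a start marker, a completion marker with op status and latency, and a beast access-log line. The anonymous "HEAD / HTTP/1.0" probes recur roughly every second from 192.168.122.100/.102 throughout the log, which is characteristic of a load-balancer health check rather than real S3 traffic. A minimal parser for the beast line, assuming the field layout shown here (an illustrative regex, not an official log-format spec):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous '
            '[24/Nov/2025:10:00:33.194 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000024s')

    m = BEAST.search(line)
    if m:
        # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.001000024
        print(m.group("ip"), m.group("req"), m.group("status"),
              float(m.group("latency")))
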
Nov 24 10:00:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccc442538893571b6a8fd9127607430cc4c501a67491a8952f04a07c7343eda2-merged.mount: Deactivated successfully.
Nov 24 10:00:33 compute-0 podman[275701]: 2025-11-24 10:00:33.211633965 +0000 UTC m=+0.174485503 container remove 765a22406deb557fc7442a7f2b28ba5b7d8e95b366ecf7e8891299550a8876fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 24 10:00:33 compute-0 systemd[1]: libpod-conmon-765a22406deb557fc7442a7f2b28ba5b7d8e95b366ecf7e8891299550a8876fd.scope: Deactivated successfully.
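
The create → init → start → attach → died → remove sequence above (plus the transient libpod-*.scope and libpod-conmon-*.scope units) is cephadm's standard pattern: each ceph-volume query runs in a fresh, throwaway container that lives for well under a second. The "167 167" the container printed is plausibly a uid/gid probe, 167 being the ceph user and group inside the image. A minimal sketch of the same throwaway-container pattern, assuming podman is on PATH; the image digest is copied from the log, but the stat invocation is illustrative, not cephadm's actual probe:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def probe_uid_gid(image: str = IMAGE) -> tuple[int, int]:
        # One-shot container: podman emits the same create/start/died/remove
        # events seen above, and --rm tears the container down immediately.
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", "/var/lib/ceph"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        return int(out[0]), int(out[1])

    # probe_uid_gid() is expected to return (167, 167), matching the log.
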
Nov 24 10:00:33 compute-0 podman[275740]: 2025-11-24 10:00:33.364686584 +0000 UTC m=+0.042203489 container create 03d43c45a5b3038bc1096f6dabcbc7e3e24f0409eff171e412bdd1c84b9b9318 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_moser, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:00:33 compute-0 systemd[1]: Started libpod-conmon-03d43c45a5b3038bc1096f6dabcbc7e3e24f0409eff171e412bdd1c84b9b9318.scope.
Nov 24 10:00:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77e31528dea3b101d0c822b422e2d6211837c3a41b3f41a1aa56058f10f79f3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77e31528dea3b101d0c822b422e2d6211837c3a41b3f41a1aa56058f10f79f3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77e31528dea3b101d0c822b422e2d6211837c3a41b3f41a1aa56058f10f79f3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77e31528dea3b101d0c822b422e2d6211837c3a41b3f41a1aa56058f10f79f3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
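
The four kernel notices above mean these XFS filesystems were created without the bigtime feature, so inode timestamps cap at the 32-bit epoch limit the message quotes. Decoding the hex cap confirms the familiar y2038 boundary:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
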
Nov 24 10:00:33 compute-0 podman[275740]: 2025-11-24 10:00:33.346650255 +0000 UTC m=+0.024167190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:00:33 compute-0 podman[275740]: 2025-11-24 10:00:33.444771555 +0000 UTC m=+0.122288460 container init 03d43c45a5b3038bc1096f6dabcbc7e3e24f0409eff171e412bdd1c84b9b9318 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_moser, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:00:33 compute-0 sshd-session[275587]: Invalid user ansible from 83.229.122.23 port 33744
Nov 24 10:00:33 compute-0 podman[275740]: 2025-11-24 10:00:33.460562881 +0000 UTC m=+0.138079786 container start 03d43c45a5b3038bc1096f6dabcbc7e3e24f0409eff171e412bdd1c84b9b9318 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 10:00:33 compute-0 podman[275740]: 2025-11-24 10:00:33.463649016 +0000 UTC m=+0.141165911 container attach 03d43c45a5b3038bc1096f6dabcbc7e3e24f0409eff171e412bdd1c84b9b9318 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:00:33 compute-0 sshd-session[275587]: Received disconnect from 83.229.122.23 port 33744:11: Bye Bye [preauth]
Nov 24 10:00:33 compute-0 sshd-session[275587]: Disconnected from invalid user ansible 83.229.122.23 port 33744 [preauth]
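
Interleaved with the storage work, an unauthenticated client at 83.229.122.23 probes for an "ansible" account and disconnects pre-auth; this is background SSH scanning, unrelated to the deployment. A quick tally of such probes from a saved log file, assuming the standard sshd wording shown above:

    import re
    from collections import Counter

    INVALID = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

    def tally_probes(path: str) -> Counter:
        # Count invalid-user attempts per (username, source IP).
        hits = Counter()
        with open(path, errors="replace") as fh:
            for line in fh:
                m = INVALID.search(line)
                if m:
                    hits[(m.group(1), m.group(2))] += 1
        return hits

    # tally_probes("/var/log/messages")
    # -> Counter({('ansible', '83.229.122.23'): 1, ...})
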
Nov 24 10:00:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:33 compute-0 charming_moser[275757]: {
Nov 24 10:00:33 compute-0 charming_moser[275757]:     "0": [
Nov 24 10:00:33 compute-0 charming_moser[275757]:         {
Nov 24 10:00:33 compute-0 charming_moser[275757]:             "devices": [
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "/dev/loop3"
Nov 24 10:00:33 compute-0 charming_moser[275757]:             ],
Nov 24 10:00:33 compute-0 charming_moser[275757]:             "lv_name": "ceph_lv0",
Nov 24 10:00:33 compute-0 charming_moser[275757]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:00:33 compute-0 charming_moser[275757]:             "lv_size": "21470642176",
Nov 24 10:00:33 compute-0 charming_moser[275757]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:00:33 compute-0 charming_moser[275757]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:00:33 compute-0 charming_moser[275757]:             "name": "ceph_lv0",
Nov 24 10:00:33 compute-0 charming_moser[275757]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:00:33 compute-0 charming_moser[275757]:             "tags": {
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.cluster_name": "ceph",
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.crush_device_class": "",
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.encrypted": "0",
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.osd_id": "0",
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.type": "block",
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.vdo": "0",
Nov 24 10:00:33 compute-0 charming_moser[275757]:                 "ceph.with_tpm": "0"
Nov 24 10:00:33 compute-0 charming_moser[275757]:             },
Nov 24 10:00:33 compute-0 charming_moser[275757]:             "type": "block",
Nov 24 10:00:33 compute-0 charming_moser[275757]:             "vg_name": "ceph_vg0"
Nov 24 10:00:33 compute-0 charming_moser[275757]:         }
Nov 24 10:00:33 compute-0 charming_moser[275757]:     ]
Nov 24 10:00:33 compute-0 charming_moser[275757]: }
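
Reassembled from the prefixed stdout lines above, the container's output is the JSON report of `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it, with the authoritative metadata carried in LVM tags (ceph.osd_id, ceph.osd_fsid, ceph.cluster_fsid, and so on). Here OSD 0 sits on /dev/ceph_vg0/ceph_lv0, which in turn sits on /dev/loop3. A sketch that pulls out the useful fields, assuming the JSON has already been captured into a string:

    import json

    def osd_devices(report_json: str) -> dict[int, dict]:
        # Map OSD id -> backing-device summary from `ceph-volume lvm list`.
        out = {}
        for osd_id, lvs in json.loads(report_json).items():
            for lv in lvs:
                tags = lv.get("tags", {})
                out[int(osd_id)] = {
                    "lv_path": lv["lv_path"],          # /dev/ceph_vg0/ceph_lv0
                    "devices": lv["devices"],          # ["/dev/loop3"]
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "size_bytes": int(lv["lv_size"]),  # 21470642176 ~ 20 GiB
                }
        return out
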
Nov 24 10:00:33 compute-0 systemd[1]: libpod-03d43c45a5b3038bc1096f6dabcbc7e3e24f0409eff171e412bdd1c84b9b9318.scope: Deactivated successfully.
Nov 24 10:00:33 compute-0 podman[275740]: 2025-11-24 10:00:33.793152344 +0000 UTC m=+0.470669269 container died 03d43c45a5b3038bc1096f6dabcbc7e3e24f0409eff171e412bdd1c84b9b9318 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 10:00:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-77e31528dea3b101d0c822b422e2d6211837c3a41b3f41a1aa56058f10f79f3a-merged.mount: Deactivated successfully.
Nov 24 10:00:33 compute-0 podman[275740]: 2025-11-24 10:00:33.836809217 +0000 UTC m=+0.514326142 container remove 03d43c45a5b3038bc1096f6dabcbc7e3e24f0409eff171e412bdd1c84b9b9318 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_moser, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 10:00:33 compute-0 systemd[1]: libpod-conmon-03d43c45a5b3038bc1096f6dabcbc7e3e24f0409eff171e412bdd1c84b9b9318.scope: Deactivated successfully.
Nov 24 10:00:33 compute-0 sudo[275634]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:33 compute-0 sudo[275780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:00:33 compute-0 sudo[275780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:33 compute-0 sudo[275780]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:34 compute-0 sudo[275805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:00:34 compute-0 sudo[275805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:34 compute-0 ceph-mon[74331]: pgmap v1013: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:34 compute-0 podman[275874]: 2025-11-24 10:00:34.412753461 +0000 UTC m=+0.043045720 container create 14a6277f5c12c4a1ddc35acb7e408f9531c5c42d6a053c02e0cbdcdb81ca5b54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_driscoll, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Nov 24 10:00:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:34.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:34 compute-0 systemd[1]: Started libpod-conmon-14a6277f5c12c4a1ddc35acb7e408f9531c5c42d6a053c02e0cbdcdb81ca5b54.scope.
Nov 24 10:00:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:00:34 compute-0 podman[275874]: 2025-11-24 10:00:34.487910292 +0000 UTC m=+0.118202601 container init 14a6277f5c12c4a1ddc35acb7e408f9531c5c42d6a053c02e0cbdcdb81ca5b54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 10:00:34 compute-0 podman[275874]: 2025-11-24 10:00:34.393758218 +0000 UTC m=+0.024050497 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:00:34 compute-0 podman[275874]: 2025-11-24 10:00:34.496701827 +0000 UTC m=+0.126994076 container start 14a6277f5c12c4a1ddc35acb7e408f9531c5c42d6a053c02e0cbdcdb81ca5b54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 10:00:34 compute-0 podman[275874]: 2025-11-24 10:00:34.500814307 +0000 UTC m=+0.131106616 container attach 14a6277f5c12c4a1ddc35acb7e408f9531c5c42d6a053c02e0cbdcdb81ca5b54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 10:00:34 compute-0 pedantic_driscoll[275890]: 167 167
Nov 24 10:00:34 compute-0 systemd[1]: libpod-14a6277f5c12c4a1ddc35acb7e408f9531c5c42d6a053c02e0cbdcdb81ca5b54.scope: Deactivated successfully.
Nov 24 10:00:34 compute-0 conmon[275890]: conmon 14a6277f5c12c4a1ddc3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14a6277f5c12c4a1ddc35acb7e408f9531c5c42d6a053c02e0cbdcdb81ca5b54.scope/container/memory.events
Nov 24 10:00:34 compute-0 podman[275874]: 2025-11-24 10:00:34.506974617 +0000 UTC m=+0.137266866 container died 14a6277f5c12c4a1ddc35acb7e408f9531c5c42d6a053c02e0cbdcdb81ca5b54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aab80a19d090844346da03d81e445c69d9b5f995327bfb3702ad3d07cd66a88-merged.mount: Deactivated successfully.
Nov 24 10:00:34 compute-0 podman[275874]: 2025-11-24 10:00:34.537966952 +0000 UTC m=+0.168259201 container remove 14a6277f5c12c4a1ddc35acb7e408f9531c5c42d6a053c02e0cbdcdb81ca5b54 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:00:34 compute-0 systemd[1]: libpod-conmon-14a6277f5c12c4a1ddc35acb7e408f9531c5c42d6a053c02e0cbdcdb81ca5b54.scope: Deactivated successfully.
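
The conmon "<nwarn>" above is a benign race, not a failure: the container exited so fast that its cgroup was removed before conmon could read the OOM counters from memory.events. On a cgroup-v2 host that file, while the scope exists, is a flat key/value list (low, high, max, oom, oom_kill). A sketch of the read conmon was attempting, using the scope path from the log (which only exists while the container is alive):

    from pathlib import Path

    SCOPE = ("/sys/fs/cgroup/machine.slice/"
             "libpod-14a6277f5c12c4a1ddc35acb7e408f9531c5c42d6a053c02e0cbdcdb81ca5b54.scope/"
             "container/memory.events")

    def read_memory_events(path: str = SCOPE) -> dict[str, int]:
        # Parse cgroup-v2 memory.events; return {} if the scope is gone,
        # which is exactly the race conmon lost above.
        try:
            text = Path(path).read_text()
        except FileNotFoundError:
            return {}
        return {key: int(val) for key, val in
                (line.split() for line in text.splitlines())}
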
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.642 257704 DEBUG nova.network.neutron [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Updating instance_info_cache with network_info: [{"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.662 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.663 257704 DEBUG nova.compute.manager [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Instance network_info: |[{"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.663 257704 DEBUG oslo_concurrency.lockutils [req-5c28b6e1-6cce-49d1-af46-1008995bf0d1 req-0dda6d61-db08-473d-ac57-f7ef815a27ca 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.663 257704 DEBUG nova.network.neutron [req-5c28b6e1-6cce-49d1-af46-1008995bf0d1 req-0dda6d61-db08-473d-ac57-f7ef815a27ca 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Refreshing network info cache for port fe53799e-0d96-417b-8153-212f65cd709e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.666 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Start _get_guest_xml network_info=[{"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_format': None, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.672 257704 WARNING nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.679 257704 DEBUG nova.virt.libvirt.host [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.680 257704 DEBUG nova.virt.libvirt.host [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.683 257704 DEBUG nova.virt.libvirt.host [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.684 257704 DEBUG nova.virt.libvirt.host [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
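
The two probes above show nova looking for a CPU controller first in the cgroups-v1 hierarchy (absent on this host) and then in cgroups v2 (present). On the unified hierarchy the v2 check amounts to reading the root controller list; a minimal equivalent, assuming the standard /sys/fs/cgroup mount (a sketch of the idea, not nova's actual implementation):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root: str = "/sys/fs/cgroup") -> bool:
        # cgroup.controllers lists the controllers available at the root,
        # e.g. "cpuset cpu io memory hugetlb pids misc"; it only exists on v2.
        ctrl = Path(root, "cgroup.controllers")
        return ctrl.is_file() and "cpu" in ctrl.read_text().split()

    # On this host: absent under v1, True under v2 -> "CPU controller found".
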
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.684 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.685 257704 DEBUG nova.virt.hardware [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.685 257704 DEBUG nova.virt.hardware [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.685 257704 DEBUG nova.virt.hardware [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.685 257704 DEBUG nova.virt.hardware [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.686 257704 DEBUG nova.virt.hardware [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.686 257704 DEBUG nova.virt.hardware [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.686 257704 DEBUG nova.virt.hardware [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.686 257704 DEBUG nova.virt.hardware [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.686 257704 DEBUG nova.virt.hardware [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.687 257704 DEBUG nova.virt.hardware [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.687 257704 DEBUG nova.virt.hardware [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
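
The hardware.py lines above trace nova's topology selection for this guest: neither the m1.nano flavor nor the image expresses a sockets/cores/threads preference (all 0:0:0), the limits default to 65536 each, and for a single vCPU the only factorization is 1 socket x 1 core x 1 thread. A condensed re-derivation of that search (a sketch of the idea, not nova's actual code):

    def possible_topologies(vcpus: int, max_sockets: int = 65536,
                            max_cores: int = 65536, max_threads: int = 65536):
        # Yield every (sockets, cores, threads) with s * c * t == vcpus
        # that fits inside the limits reported in the log.
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            rest = vcpus // s
            for c in range(1, min(rest, max_cores) + 1):
                if rest % c:
                    continue
                t = rest // c
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))   # -> [(1, 1, 1)], the topology chosen above
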
Nov 24 10:00:34 compute-0 nova_compute[257700]: 2025-11-24 10:00:34.689 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
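
Before touching RBD, nova shells out to `ceph mon dump --format=json` through oslo.concurrency's processutils to learn the monitor map; the matching dispatch entries appear in the ceph-mon audit log moments later, and the command returns 0 in about 0.45 s. The same call reduced to the standard library, with the client id and conf path taken from the log:

    import json
    import subprocess
    import time

    def mon_dump() -> dict:
        cmd = ["ceph", "mon", "dump", "--format=json",
               "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
        t0 = time.monotonic()
        # check=True raises on a non-zero exit, so reaching print means rc 0.
        proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
        print(f'CMD "{" ".join(cmd)}" returned: 0 in {time.monotonic() - t0:.3f}s')
        return json.loads(proc.stdout)

    # mon_dump()["mons"] lists each monitor's name and public address.
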
Nov 24 10:00:34 compute-0 podman[275914]: 2025-11-24 10:00:34.731699602 +0000 UTC m=+0.054730315 container create 1797f1a06eedbd2c83b29c3c09a86e8eb1be28aab6a3e4e29ac586d034a35f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:00:34 compute-0 systemd[1]: Started libpod-conmon-1797f1a06eedbd2c83b29c3c09a86e8eb1be28aab6a3e4e29ac586d034a35f75.scope.
Nov 24 10:00:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399cab4b67970caf0a2f3390e065ae7ce3760daab3c330215aacb17e5a82972b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:34 compute-0 podman[275914]: 2025-11-24 10:00:34.705006501 +0000 UTC m=+0.028037244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399cab4b67970caf0a2f3390e065ae7ce3760daab3c330215aacb17e5a82972b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399cab4b67970caf0a2f3390e065ae7ce3760daab3c330215aacb17e5a82972b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399cab4b67970caf0a2f3390e065ae7ce3760daab3c330215aacb17e5a82972b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:34 compute-0 podman[275914]: 2025-11-24 10:00:34.809277133 +0000 UTC m=+0.132307856 container init 1797f1a06eedbd2c83b29c3c09a86e8eb1be28aab6a3e4e29ac586d034a35f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_euler, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Nov 24 10:00:34 compute-0 podman[275914]: 2025-11-24 10:00:34.816940549 +0000 UTC m=+0.139971252 container start 1797f1a06eedbd2c83b29c3c09a86e8eb1be28aab6a3e4e29ac586d034a35f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 10:00:34 compute-0 podman[275914]: 2025-11-24 10:00:34.820882585 +0000 UTC m=+0.143913318 container attach 1797f1a06eedbd2c83b29c3c09a86e8eb1be28aab6a3e4e29ac586d034a35f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_euler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:00:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:00:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:00:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:00:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:00:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 10:00:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2929576563' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.134 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:35 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2929576563' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.175 257704 DEBUG nova.storage.rbd_utils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.184 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:35.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:35 compute-0 lvm[276065]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:00:35 compute-0 lvm[276065]: VG ceph_vg0 finished
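
The lvm[276065] pair is event-driven autoactivation: udev saw /dev/loop3, pvscan registered the PV, and the volume group ceph_vg0 became complete. The same facts can be pulled from lvm2's JSON reporting mode; a sketch, assuming `pvs --reportformat json` and its usual report layout:

    import json
    import subprocess

    def pv_to_vg() -> dict[str, str]:
        # Ask lvm2 for its PV report in JSON and map PV -> VG.
        out = subprocess.run(["pvs", "--reportformat", "json"],
                             capture_output=True, text=True, check=True).stdout
        report = json.loads(out)
        return {pv["pv_name"]: pv["vg_name"]
                for pv in report["report"][0]["pv"]}

    # Expected to include {"/dev/loop3": "ceph_vg0"} on this host.
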
Nov 24 10:00:35 compute-0 loving_euler[275931]: {}
Nov 24 10:00:35 compute-0 systemd[1]: libpod-1797f1a06eedbd2c83b29c3c09a86e8eb1be28aab6a3e4e29ac586d034a35f75.scope: Deactivated successfully.
Nov 24 10:00:35 compute-0 systemd[1]: libpod-1797f1a06eedbd2c83b29c3c09a86e8eb1be28aab6a3e4e29ac586d034a35f75.scope: Consumed 1.122s CPU time.
Nov 24 10:00:35 compute-0 podman[276070]: 2025-11-24 10:00:35.604267793 +0000 UTC m=+0.023758330 container died 1797f1a06eedbd2c83b29c3c09a86e8eb1be28aab6a3e4e29ac586d034a35f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_euler, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:00:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 10:00:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075905231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-399cab4b67970caf0a2f3390e065ae7ce3760daab3c330215aacb17e5a82972b-merged.mount: Deactivated successfully.
Nov 24 10:00:35 compute-0 podman[276070]: 2025-11-24 10:00:35.642829972 +0000 UTC m=+0.062320479 container remove 1797f1a06eedbd2c83b29c3c09a86e8eb1be28aab6a3e4e29ac586d034a35f75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_euler, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.642 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.645 257704 DEBUG nova.virt.libvirt.vif [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:00:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-734446179',display_name='tempest-TestNetworkBasicOps-server-734446179',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-734446179',id=8,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMX9u2lFDZUos4ohC6nHDtLaMV1Dff0qDObdMLHI+iAm7eXVRiPlcJ4pkSJ+46hrR/OGkTm0t1XXhDa/sS7OeQ7rGlUJHCv/4ZQR1ERnCZh2xC95FcEXuADWxLoiaB7L3w==',key_name='tempest-TestNetworkBasicOps-1247570914',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-y0zr1jt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:00:29Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.646 257704 DEBUG nova.network.os_vif_util [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.647 257704 DEBUG nova.network.os_vif_util [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
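
The os_vif_util pair above is nova translating its legacy VIF dict into a typed os-vif object: "type": "ovs" plus the binding details become a VIFOpenVSwitch with bridge_name='br-int', vif_name='tapfe53799e-0d', has_traffic_filtering=True (from port_filter), and active=False. A toy version of that mapping, covering only the fields visible in the log (a sketch, not the real os-vif classes):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        active: bool

    def nova_to_osvif(vif: dict) -> VIFOpenVSwitch:
        # Minimal analogue of nova_to_osvif_vif for the 'ovs' case above.
        assert vif["type"] == "ovs"
        details = vif.get("details", {})
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details["bridge_name"],            # br-int
            vif_name=vif["devname"],                       # tapfe53799e-0d
            has_traffic_filtering=bool(details.get("port_filter")),
            active=bool(vif.get("active")),
        )
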
Nov 24 10:00:35 compute-0 systemd[1]: libpod-conmon-1797f1a06eedbd2c83b29c3c09a86e8eb1be28aab6a3e4e29ac586d034a35f75.scope: Deactivated successfully.
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.648 257704 DEBUG nova.objects.instance [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.661 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] End _get_guest_xml xml=<domain type="kvm">
Nov 24 10:00:35 compute-0 nova_compute[257700]:   <uuid>7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e</uuid>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   <name>instance-00000008</name>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   <memory>131072</memory>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   <vcpu>1</vcpu>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   <metadata>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <nova:name>tempest-TestNetworkBasicOps-server-734446179</nova:name>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <nova:creationTime>2025-11-24 10:00:34</nova:creationTime>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <nova:flavor name="m1.nano">
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <nova:memory>128</nova:memory>
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <nova:disk>1</nova:disk>
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <nova:swap>0</nova:swap>
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <nova:vcpus>1</nova:vcpus>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       </nova:flavor>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <nova:owner>
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       </nova:owner>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <nova:ports>
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <nova:port uuid="fe53799e-0d96-417b-8153-212f65cd709e">
Nov 24 10:00:35 compute-0 nova_compute[257700]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:         </nova:port>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       </nova:ports>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     </nova:instance>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   </metadata>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   <sysinfo type="smbios">
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <system>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <entry name="manufacturer">RDO</entry>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <entry name="product">OpenStack Compute</entry>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <entry name="serial">7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e</entry>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <entry name="uuid">7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e</entry>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <entry name="family">Virtual Machine</entry>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     </system>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   </sysinfo>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   <os>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <boot dev="hd"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <smbios mode="sysinfo"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   </os>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   <features>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <acpi/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <apic/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <vmcoreinfo/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   </features>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   <clock offset="utc">
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <timer name="hpet" present="no"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   </clock>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   <cpu mode="host-model" match="exact">
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   </cpu>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   <devices>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <disk type="network" device="disk">
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk">
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       </source>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       </auth>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <target dev="vda" bus="virtio"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     </disk>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <disk type="network" device="cdrom">
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk.config">
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       </source>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 10:00:35 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       </auth>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <target dev="sda" bus="sata"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     </disk>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <interface type="ethernet">
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <mac address="fa:16:3e:19:3d:30"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <mtu size="1442"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <target dev="tapfe53799e-0d"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     </interface>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <serial type="pty">
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <log file="/var/lib/nova/instances/7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e/console.log" append="off"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     </serial>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <video>
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     </video>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <input type="tablet" bus="usb"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <rng model="virtio">
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <backend model="random">/dev/urandom</backend>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     </rng>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <controller type="usb" index="0"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     <memballoon model="virtio">
Nov 24 10:00:35 compute-0 nova_compute[257700]:       <stats period="10"/>
Nov 24 10:00:35 compute-0 nova_compute[257700]:     </memballoon>
Nov 24 10:00:35 compute-0 nova_compute[257700]:   </devices>
Nov 24 10:00:35 compute-0 nova_compute[257700]: </domain>
Nov 24 10:00:35 compute-0 nova_compute[257700]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
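The _get_guest_xml entry above is the complete libvirt domain definition nova generated for instance-00000008: q35 machine type, host-model CPU, an RBD-backed virtio root disk plus a SATA config-drive CDROM, and an ethernet interface targeting tapfe53799e-0d. A hedged sketch of handing such XML to libvirt directly with the libvirt Python bindings (standalone illustration; nova drives the equivalent define/launch sequence inside nova.virt.libvirt.driver):

    # Sketch: defining and starting a guest from a domain XML like the one
    # logged above. The file name is hypothetical.
    import libvirt

    with open("instance-00000008.xml") as f:  # the <domain> document above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # boot it (equivalent to 'virsh start')
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()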
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.663 257704 DEBUG nova.compute.manager [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Preparing to wait for external event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.663 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.663 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.663 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
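The three lockutils entries above are one acquire/release cycle on the per-instance "-events" lock, which serializes access to the instance-event table while the wait for network-vif-plugged is registered. The pattern is oslo.concurrency's lock decorator; a minimal sketch, with the lock name copied from the log and the function body hypothetical:

    # Sketch of the oslo.concurrency pattern behind the
    # "Acquiring lock ... acquired ... released" trio above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events")
    def _create_or_get_event():
        # Critical section: look up or create the event entry for this
        # instance (body hypothetical).
        return {}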
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.664 257704 DEBUG nova.virt.libvirt.vif [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:00:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-734446179',display_name='tempest-TestNetworkBasicOps-server-734446179',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-734446179',id=8,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMX9u2lFDZUos4ohC6nHDtLaMV1Dff0qDObdMLHI+iAm7eXVRiPlcJ4pkSJ+46hrR/OGkTm0t1XXhDa/sS7OeQ7rGlUJHCv/4ZQR1ERnCZh2xC95FcEXuADWxLoiaB7L3w==',key_name='tempest-TestNetworkBasicOps-1247570914',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-y0zr1jt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:00:29Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.664 257704 DEBUG nova.network.os_vif_util [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.665 257704 DEBUG nova.network.os_vif_util [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.665 257704 DEBUG os_vif [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.666 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.666 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.666 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.669 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.669 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe53799e-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.670 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfe53799e-0d, col_values=(('external_ids', {'iface-id': 'fe53799e-0d96-417b-8153-212f65cd709e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:3d:30', 'vm-uuid': '7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
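The two ovsdbapp commands above (AddPortCommand, then DbSetCommand on the Interface record) form the OVSDB transaction that actually wires the tap device into br-int and stamps it with the iface-id/attached-mac external_ids that OVN matches the logical port on. A hedged sketch of issuing the same transaction with ovsdbapp directly; the local OVSDB socket path is an assumption:

    # Sketch: the AddPort + DbSet transaction shown in the log, issued via
    # ovsdbapp. The socket path is an assumption about the local host.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/run/openvswitch/db.sock"  # assumed local socket

    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    external_ids = {
        "iface-id": "fe53799e-0d96-417b-8153-212f65cd709e",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:19:3d:30",
        "vm-uuid": "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e",
    }

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapfe53799e-0d", may_exist=True))
        txn.add(api.db_set("Interface", "tapfe53799e-0d",
                           ("external_ids", external_ids)))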
Nov 24 10:00:35 compute-0 sudo[275805]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:00:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:00:35 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.727 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.729 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:00:35 compute-0 NetworkManager[48883]: <info>  [1763978435.7298] manager: (tapfe53799e-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.735 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.735 257704 INFO os_vif [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d')
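"Successfully plugged vif" closes the os-vif plug() call that started at the "Plugging vif" entry above. The public entry points involved are small; a sketch, reusing the ovs_vif object from the earlier sketch and with InstanceInfo values copied from the log:

    # Sketch: the public os-vif calls behind "Plugging vif ... Successfully
    # plugged vif". `ovs_vif` is the object built in the earlier sketch.
    import os_vif
    from os_vif.objects import instance_info

    os_vif.initialize()  # load the registered plugins (ovs, noop, ...)

    info = instance_info.InstanceInfo(
        uuid="7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e",
        name="instance-00000008",
    )
    os_vif.plug(ovs_vif, info)  # idempotent; creates/updates the OVS port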
Nov 24 10:00:35 compute-0 sudo[276084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.784 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.784 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.785 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:19:3d:30, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 10:00:35 compute-0 sudo[276084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.785 257704 INFO nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Using config drive
Nov 24 10:00:35 compute-0 sudo[276084]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:35 compute-0 nova_compute[257700]: 2025-11-24 10:00:35.807 257704 DEBUG nova.storage.rbd_utils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:00:36 compute-0 nova_compute[257700]: 2025-11-24 10:00:36.049 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:36 compute-0 ceph-mon[74331]: pgmap v1014: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2075905231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:00:36 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:36 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:00:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:36.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:36 compute-0 nova_compute[257700]: 2025-11-24 10:00:36.586 257704 INFO nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Creating config drive at /var/lib/nova/instances/7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e/disk.config
Nov 24 10:00:36 compute-0 nova_compute[257700]: 2025-11-24 10:00:36.593 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2rckk3l1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:36 compute-0 nova_compute[257700]: 2025-11-24 10:00:36.658 257704 DEBUG nova.network.neutron [req-5c28b6e1-6cce-49d1-af46-1008995bf0d1 req-0dda6d61-db08-473d-ac57-f7ef815a27ca 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Updated VIF entry in instance network info cache for port fe53799e-0d96-417b-8153-212f65cd709e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:00:36 compute-0 nova_compute[257700]: 2025-11-24 10:00:36.660 257704 DEBUG nova.network.neutron [req-5c28b6e1-6cce-49d1-af46-1008995bf0d1 req-0dda6d61-db08-473d-ac57-f7ef815a27ca 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Updating instance_info_cache with network_info: [{"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:00:36 compute-0 nova_compute[257700]: 2025-11-24 10:00:36.673 257704 DEBUG oslo_concurrency.lockutils [req-5c28b6e1-6cce-49d1-af46-1008995bf0d1 req-0dda6d61-db08-473d-ac57-f7ef815a27ca 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:00:36 compute-0 nova_compute[257700]: 2025-11-24 10:00:36.731 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2rckk3l1" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
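The pair of processutils entries above show the config drive being built: nova renders the metadata files into a temp directory and packs them into an ISO9660 image with volume label config-2, the label cloud-init probes for inside the guest. A sketch of the same invocation through oslo.concurrency, with every path and flag copied from the logged command:

    # Sketch: replaying the logged mkisofs invocation with
    # oslo_concurrency.processutils (paths and flags copied from the log).
    from oslo_concurrency import processutils

    stdout, stderr = processutils.execute(
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/"
              "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmp2rckk3l1",
    )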
Nov 24 10:00:36 compute-0 nova_compute[257700]: 2025-11-24 10:00:36.766 257704 DEBUG nova.storage.rbd_utils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:00:36 compute-0 nova_compute[257700]: 2025-11-24 10:00:36.773 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e/disk.config 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:36 compute-0 nova_compute[257700]: 2025-11-24 10:00:36.953 257704 DEBUG oslo_concurrency.processutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e/disk.config 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:36 compute-0 nova_compute[257700]: 2025-11-24 10:00:36.954 257704 INFO nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Deleting local config drive /var/lib/nova/instances/7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e/disk.config because it was imported into RBD.
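Because this deployment stores disks in Ceph, the freshly built ISO is pushed into the vms pool as <uuid>_disk.config and the local copy is deleted; the domain XML above already points the SATA cdrom at that RBD image. A sketch of the same import-then-delete step, with arguments copied from the logged rbd command:

    # Sketch: the import-then-delete sequence from the two entries above,
    # using the same rbd CLI arguments that appear in the log.
    import os
    from oslo_concurrency import processutils

    local = ("/var/lib/nova/instances/"
             "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e/disk.config")

    processutils.execute(
        "rbd", "import", "--pool", "vms", local,
        "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    )
    os.unlink(local)  # drop the local copy once it lives in RBD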
Nov 24 10:00:37 compute-0 kernel: tapfe53799e-0d: entered promiscuous mode
Nov 24 10:00:37 compute-0 NetworkManager[48883]: <info>  [1763978437.0157] manager: (tapfe53799e-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Nov 24 10:00:37 compute-0 ovn_controller[155123]: 2025-11-24T10:00:37Z|00059|binding|INFO|Claiming lport fe53799e-0d96-417b-8153-212f65cd709e for this chassis.
Nov 24 10:00:37 compute-0 ovn_controller[155123]: 2025-11-24T10:00:37Z|00060|binding|INFO|fe53799e-0d96-417b-8153-212f65cd709e: Claiming fa:16:3e:19:3d:30 10.100.0.9
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.016 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:37 compute-0 systemd-udevd[276064]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.020 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.022 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.035 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:3d:30 10.100.0.9'], port_security=['fa:16:3e:19:3d:30 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1496723339', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1496723339', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33c3a403-57a0-4b88-8817-f12f4bfc92ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd5e84a2-6af3-4d25-9e2e-39e01701962b, chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=fe53799e-0d96-417b-8153-212f65cd709e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.036 165073 INFO neutron.agent.ovn.metadata.agent [-] Port fe53799e-0d96-417b-8153-212f65cd709e in datapath 2d64d66d-0f9e-4429-a21c-7e55f44b1e68 bound to our chassis
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.037 165073 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d64d66d-0f9e-4429-a21c-7e55f44b1e68
Nov 24 10:00:37 compute-0 NetworkManager[48883]: <info>  [1763978437.0448] device (tapfe53799e-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 10:00:37 compute-0 systemd-machined[219130]: New machine qemu-4-instance-00000008.
Nov 24 10:00:37 compute-0 NetworkManager[48883]: <info>  [1763978437.0458] device (tapfe53799e-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.052 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[64d58a62-dbdb-49c4-906a-a07acde89f2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.053 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d64d66d-01 in ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.055 264910 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d64d66d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.056 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[d462aa92-6673-4bf8-a098-113c2ea68b14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.057 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[2bad7c55-b747-4714-b97f-0a9349508337]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.072 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac6bc5b-8f7f-402f-84f4-140136509bcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:37 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000008.
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.088 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:37 compute-0 ovn_controller[155123]: 2025-11-24T10:00:37Z|00061|binding|INFO|Setting lport fe53799e-0d96-417b-8153-212f65cd709e ovn-installed in OVS
Nov 24 10:00:37 compute-0 ovn_controller[155123]: 2025-11-24T10:00:37Z|00062|binding|INFO|Setting lport fe53799e-0d96-417b-8153-212f65cd709e up in Southbound
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.092 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.101 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[0420dcd1-593e-46d9-a662-6a453d4ac557]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.138 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[9d43ee17-1906-4ab9-95fd-dd6dfa48ec54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 NetworkManager[48883]: <info>  [1763978437.1454] manager: (tap2d64d66d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.144 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e7f956-8071-47da-b933-78c6165dd8f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.188 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[aee4838f-5e2a-482a-a503-17ded52ca488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.192 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[ab41eba7-c0b7-4db6-ac11-0ea4b02d3f89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:37.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:37 compute-0 NetworkManager[48883]: <info>  [1763978437.2229] device (tap2d64d66d-00): carrier: link connected
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.232 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[2ccf528c-15fd-47a9-a033-e8d680e04c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.251 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[4b328613-613f-49c4-bf19-5c6a530651bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d64d66d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:58:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438479, 'reachable_time': 25238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276214, 'error': None, 'target': 'ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.271 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[0d589363-0ae4-4e80-8c4a-b464d628ef3b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:5850'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438479, 'tstamp': 438479}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276215, 'error': None, 'target': 'ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.305 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[22240c47-95e8-46b2-bdc7-cd026d68cbb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d64d66d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:58:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438479, 'reachable_time': 25238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276223, 'error': None, 'target': 'ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.347 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe13f61-c791-480e-8d02-8441b159b62e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.407 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[c2497c74-ad5c-48c8-862d-9e81239e6671]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.409 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d64d66d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.410 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.410 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d64d66d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:37 compute-0 NetworkManager[48883]: <info>  [1763978437.4128] manager: (tap2d64d66d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.412 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:37 compute-0 kernel: tap2d64d66d-00: entered promiscuous mode
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.414 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.415 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d64d66d-00, col_values=(('external_ids', {'iface-id': '711ae8ab-4c6e-4296-ba4f-192226ad0d42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.416 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:37 compute-0 ovn_controller[155123]: 2025-11-24T10:00:37Z|00063|binding|INFO|Releasing lport 711ae8ab-4c6e-4296-ba4f-192226ad0d42 from this chassis (sb_readonly=0)
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.430 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.432 165073 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d64d66d-0f9e-4429-a21c-7e55f44b1e68.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d64d66d-0f9e-4429-a21c-7e55f44b1e68.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.433 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[ca57966c-01ed-4566-8908-1eeb6eb152e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.434 165073 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: global
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     log         /dev/log local0 debug
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     log-tag     haproxy-metadata-proxy-2d64d66d-0f9e-4429-a21c-7e55f44b1e68
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     user        root
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     group       root
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     maxconn     1024
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     pidfile     /var/lib/neutron/external/pids/2d64d66d-0f9e-4429-a21c-7e55f44b1e68.pid.haproxy
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     daemon
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: defaults
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     log global
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     mode http
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     option httplog
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     option dontlognull
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     option http-server-close
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     option forwardfor
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     retries                 3
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     timeout http-request    30s
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     timeout connect         30s
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     timeout client          32s
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     timeout server          32s
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     timeout http-keep-alive 30s
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: listen listener
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     bind 169.254.169.254:80
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:     http-request add-header X-OVN-Network-ID 2d64d66d-0f9e-4429-a21c-7e55f44b1e68
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 10:00:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:37.436 165073 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'env', 'PROCESS_TAG=haproxy-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d64d66d-0f9e-4429-a21c-7e55f44b1e68.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.467 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978437.4670396, 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.467 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] VM Started (Lifecycle Event)
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.485 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.489 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978437.4672096, 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.490 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] VM Paused (Lifecycle Event)
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.516 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.519 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 10:00:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:37.543Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:00:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:37.543Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:00:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:37.543Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.558 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.768 257704 DEBUG nova.compute.manager [req-5f44e4d8-7ce8-48e2-bc08-1921eeb3df3b req-97e3529e-5f6c-4ba9-9477-59718daf6db4 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Received event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.768 257704 DEBUG oslo_concurrency.lockutils [req-5f44e4d8-7ce8-48e2-bc08-1921eeb3df3b req-97e3529e-5f6c-4ba9-9477-59718daf6db4 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.768 257704 DEBUG oslo_concurrency.lockutils [req-5f44e4d8-7ce8-48e2-bc08-1921eeb3df3b req-97e3529e-5f6c-4ba9-9477-59718daf6db4 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.769 257704 DEBUG oslo_concurrency.lockutils [req-5f44e4d8-7ce8-48e2-bc08-1921eeb3df3b req-97e3529e-5f6c-4ba9-9477-59718daf6db4 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.769 257704 DEBUG nova.compute.manager [req-5f44e4d8-7ce8-48e2-bc08-1921eeb3df3b req-97e3529e-5f6c-4ba9-9477-59718daf6db4 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Processing event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.769 257704 DEBUG nova.compute.manager [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.773 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978437.7724643, 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.773 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] VM Resumed (Lifecycle Event)
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.775 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.779 257704 INFO nova.virt.libvirt.driver [-] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Instance spawned successfully.
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.779 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.798 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.804 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.808 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.808 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.809 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.809 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.810 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.810 257704 DEBUG nova.virt.libvirt.driver [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:00:37 compute-0 podman[276291]: 2025-11-24 10:00:37.815405729 +0000 UTC m=+0.055200417 container create 0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.832 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 10:00:37 compute-0 systemd[1]: Started libpod-conmon-0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b.scope.
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.859 257704 INFO nova.compute.manager [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Took 8.21 seconds to spawn the instance on the hypervisor.
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.860 257704 DEBUG nova.compute.manager [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:00:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba2ec2ccf865a3f06a0be7dd7dda71ce28d6cb02b4cad89c277cf49f68ec410/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 10:00:37 compute-0 podman[276291]: 2025-11-24 10:00:37.78593484 +0000 UTC m=+0.025729548 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 10:00:37 compute-0 podman[276291]: 2025-11-24 10:00:37.893394229 +0000 UTC m=+0.133188927 container init 0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 10:00:37 compute-0 podman[276291]: 2025-11-24 10:00:37.899550109 +0000 UTC m=+0.139344807 container start 0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.910 257704 INFO nova.compute.manager [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Took 9.05 seconds to build instance.
Nov 24 10:00:37 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276306]: [NOTICE]   (276310) : New worker (276312) forked
Nov 24 10:00:37 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276306]: [NOTICE]   (276310) : Loading success.
Nov 24 10:00:37 compute-0 nova_compute[257700]: 2025-11-24 10:00:37.925 257704 DEBUG oslo_concurrency.lockutils [None req-2399daf8-a90a-4b21-aba2-b4b78c4eca2f 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:38 compute-0 ceph-mon[74331]: pgmap v1015: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:38.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:38.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:00:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:39.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:39 compute-0 nova_compute[257700]: 2025-11-24 10:00:39.853 257704 DEBUG nova.compute.manager [req-462c32f2-9e1a-4c86-a384-a3518a067de6 req-e0ed80e1-d163-4d1f-ae90-ebdc1943621d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Received event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:39 compute-0 nova_compute[257700]: 2025-11-24 10:00:39.853 257704 DEBUG oslo_concurrency.lockutils [req-462c32f2-9e1a-4c86-a384-a3518a067de6 req-e0ed80e1-d163-4d1f-ae90-ebdc1943621d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:39 compute-0 nova_compute[257700]: 2025-11-24 10:00:39.853 257704 DEBUG oslo_concurrency.lockutils [req-462c32f2-9e1a-4c86-a384-a3518a067de6 req-e0ed80e1-d163-4d1f-ae90-ebdc1943621d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:39 compute-0 nova_compute[257700]: 2025-11-24 10:00:39.854 257704 DEBUG oslo_concurrency.lockutils [req-462c32f2-9e1a-4c86-a384-a3518a067de6 req-e0ed80e1-d163-4d1f-ae90-ebdc1943621d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:39 compute-0 nova_compute[257700]: 2025-11-24 10:00:39.854 257704 DEBUG nova.compute.manager [req-462c32f2-9e1a-4c86-a384-a3518a067de6 req-e0ed80e1-d163-4d1f-ae90-ebdc1943621d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] No waiting events found dispatching network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:00:39 compute-0 nova_compute[257700]: 2025-11-24 10:00:39.854 257704 WARNING nova.compute.manager [req-462c32f2-9e1a-4c86-a384-a3518a067de6 req-e0ed80e1-d163-4d1f-ae90-ebdc1943621d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Received unexpected event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e for instance with vm_state active and task_state None.
Nov 24 10:00:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:00:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:00:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:00:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:00:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=infra.usagestats t=2025-11-24T10:00:40.191632046Z level=info msg="Usage stats are ready to report"
Nov 24 10:00:40 compute-0 ceph-mon[74331]: pgmap v1016: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:40.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:40 compute-0 nova_compute[257700]: 2025-11-24 10:00:40.728 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:00:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:00:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:00:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:00:41 compute-0 nova_compute[257700]: 2025-11-24 10:00:41.051 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:41.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:41 compute-0 nova_compute[257700]: 2025-11-24 10:00:41.377 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:41 compute-0 NetworkManager[48883]: <info>  [1763978441.3782] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 24 10:00:41 compute-0 NetworkManager[48883]: <info>  [1763978441.3794] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 24 10:00:41 compute-0 ovn_controller[155123]: 2025-11-24T10:00:41Z|00064|binding|INFO|Releasing lport 711ae8ab-4c6e-4296-ba4f-192226ad0d42 from this chassis (sb_readonly=0)
Nov 24 10:00:41 compute-0 nova_compute[257700]: 2025-11-24 10:00:41.415 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:41 compute-0 ovn_controller[155123]: 2025-11-24T10:00:41Z|00065|binding|INFO|Releasing lport 711ae8ab-4c6e-4296-ba4f-192226ad0d42 from this chassis (sb_readonly=0)
Nov 24 10:00:41 compute-0 nova_compute[257700]: 2025-11-24 10:00:41.421 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:41 compute-0 nova_compute[257700]: 2025-11-24 10:00:41.768 257704 DEBUG nova.compute.manager [req-78790f93-a2a9-4c72-bde8-40d50b048ffa req-a6a350c1-c723-4ddd-8c5b-b6e8b060d42d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Received event network-changed-fe53799e-0d96-417b-8153-212f65cd709e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:41 compute-0 nova_compute[257700]: 2025-11-24 10:00:41.769 257704 DEBUG nova.compute.manager [req-78790f93-a2a9-4c72-bde8-40d50b048ffa req-a6a350c1-c723-4ddd-8c5b-b6e8b060d42d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Refreshing instance network info cache due to event network-changed-fe53799e-0d96-417b-8153-212f65cd709e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:00:41 compute-0 nova_compute[257700]: 2025-11-24 10:00:41.769 257704 DEBUG oslo_concurrency.lockutils [req-78790f93-a2a9-4c72-bde8-40d50b048ffa req-a6a350c1-c723-4ddd-8c5b-b6e8b060d42d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:00:41 compute-0 nova_compute[257700]: 2025-11-24 10:00:41.769 257704 DEBUG oslo_concurrency.lockutils [req-78790f93-a2a9-4c72-bde8-40d50b048ffa req-a6a350c1-c723-4ddd-8c5b-b6e8b060d42d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:00:41 compute-0 nova_compute[257700]: 2025-11-24 10:00:41.769 257704 DEBUG nova.network.neutron [req-78790f93-a2a9-4c72-bde8-40d50b048ffa req-a6a350c1-c723-4ddd-8c5b-b6e8b060d42d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Refreshing network info cache for port fe53799e-0d96-417b-8153-212f65cd709e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:00:41 compute-0 podman[276326]: 2025-11-24 10:00:41.819339817 +0000 UTC m=+0.085132416 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 10:00:41 compute-0 podman[276327]: 2025-11-24 10:00:41.846279593 +0000 UTC m=+0.109361406 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.000 257704 DEBUG oslo_concurrency.lockutils [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.001 257704 DEBUG oslo_concurrency.lockutils [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.001 257704 DEBUG oslo_concurrency.lockutils [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.002 257704 DEBUG oslo_concurrency.lockutils [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.002 257704 DEBUG oslo_concurrency.lockutils [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.004 257704 INFO nova.compute.manager [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Terminating instance
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.005 257704 DEBUG nova.compute.manager [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 10:00:42 compute-0 kernel: tapfe53799e-0d (unregistering): left promiscuous mode
Nov 24 10:00:42 compute-0 NetworkManager[48883]: <info>  [1763978442.0517] device (tapfe53799e-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 10:00:42 compute-0 ovn_controller[155123]: 2025-11-24T10:00:42Z|00066|binding|INFO|Releasing lport fe53799e-0d96-417b-8153-212f65cd709e from this chassis (sb_readonly=0)
Nov 24 10:00:42 compute-0 ovn_controller[155123]: 2025-11-24T10:00:42Z|00067|binding|INFO|Setting lport fe53799e-0d96-417b-8153-212f65cd709e down in Southbound
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.109 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:42 compute-0 ovn_controller[155123]: 2025-11-24T10:00:42Z|00068|binding|INFO|Removing iface tapfe53799e-0d ovn-installed in OVS
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.120 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:3d:30 10.100.0.9'], port_security=['fa:16:3e:19:3d:30 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1496723339', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1496723339', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33c3a403-57a0-4b88-8817-f12f4bfc92ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd5e84a2-6af3-4d25-9e2e-39e01701962b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=fe53799e-0d96-417b-8153-212f65cd709e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.122 165073 INFO neutron.agent.ovn.metadata.agent [-] Port fe53799e-0d96-417b-8153-212f65cd709e in datapath 2d64d66d-0f9e-4429-a21c-7e55f44b1e68 unbound from our chassis
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.123 165073 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d64d66d-0f9e-4429-a21c-7e55f44b1e68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.124 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[645381f1-0adb-4594-9359-7890f78633a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.125 165073 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68 namespace which is not needed anymore
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.126 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:42 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 24 10:00:42 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Consumed 4.743s CPU time.
Nov 24 10:00:42 compute-0 systemd-machined[219130]: Machine qemu-4-instance-00000008 terminated.
Nov 24 10:00:42 compute-0 ceph-mon[74331]: pgmap v1017: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.255 257704 INFO nova.virt.libvirt.driver [-] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Instance destroyed successfully.
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.258 257704 DEBUG nova.objects.instance [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.268 257704 DEBUG nova.virt.libvirt.vif [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T10:00:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-734446179',display_name='tempest-TestNetworkBasicOps-server-734446179',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-734446179',id=8,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMX9u2lFDZUos4ohC6nHDtLaMV1Dff0qDObdMLHI+iAm7eXVRiPlcJ4pkSJ+46hrR/OGkTm0t1XXhDa/sS7OeQ7rGlUJHCv/4ZQR1ERnCZh2xC95FcEXuADWxLoiaB7L3w==',key_name='tempest-TestNetworkBasicOps-1247570914',keypairs=<?>,launch_index=0,launched_at=2025-11-24T10:00:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-y0zr1jt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T10:00:37Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.269 257704 DEBUG nova.network.os_vif_util [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.270 257704 DEBUG nova.network.os_vif_util [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.270 257704 DEBUG os_vif [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.272 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.273 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe53799e-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.274 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.276 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.279 257704 INFO os_vif [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d')
Nov 24 10:00:42 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276306]: [NOTICE]   (276310) : haproxy version is 2.8.14-c23fe91
Nov 24 10:00:42 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276306]: [NOTICE]   (276310) : path to executable is /usr/sbin/haproxy
Nov 24 10:00:42 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276306]: [WARNING]  (276310) : Exiting Master process...
Nov 24 10:00:42 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276306]: [ALERT]    (276310) : Current worker (276312) exited with code 143 (Terminated)
Nov 24 10:00:42 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276306]: [WARNING]  (276310) : All workers exited. Exiting... (0)
Nov 24 10:00:42 compute-0 systemd[1]: libpod-0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b.scope: Deactivated successfully.
Nov 24 10:00:42 compute-0 podman[276395]: 2025-11-24 10:00:42.293440198 +0000 UTC m=+0.054077988 container died 0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 24 10:00:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ba2ec2ccf865a3f06a0be7dd7dda71ce28d6cb02b4cad89c277cf49f68ec410-merged.mount: Deactivated successfully.
Nov 24 10:00:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b-userdata-shm.mount: Deactivated successfully.
Nov 24 10:00:42 compute-0 podman[276395]: 2025-11-24 10:00:42.377204739 +0000 UTC m=+0.137842509 container cleanup 0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:00:42 compute-0 systemd[1]: libpod-conmon-0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b.scope: Deactivated successfully.
Nov 24 10:00:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:42.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:42 compute-0 podman[276455]: 2025-11-24 10:00:42.464215799 +0000 UTC m=+0.065597290 container remove 0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.471 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea99a63-f156-4f55-9d79-1598f93f2ec4]: (4, ('Mon Nov 24 10:00:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68 (0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b)\n0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b\nMon Nov 24 10:00:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68 (0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b)\n0cdf0435fcf6b75392fde3ac93f1dae3d5f5b89a497ec31518aef23e78e4742b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.472 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[36627da5-3bb5-45af-983c-98fa32e185ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.473 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d64d66d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:00:42 compute-0 kernel: tap2d64d66d-00: left promiscuous mode
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.475 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.491 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.493 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[09e813e8-d5e7-467a-94eb-72a5cdca5397]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.507 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[5be24e21-6fae-418b-965c-e12dd315481b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.508 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[9d6960fb-0c16-43f2-bd6f-90edc2917508]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.525 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[606ad0dd-96dd-4376-912b-8bb883d9c8be]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438470, 'reachable_time': 29392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276471, 'error': None, 'target': 'ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
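[editor's note] The privsep reply above is a pyroute2-style netlink dump (an RTM_NEWLINK record for 'lo' inside the ovnmeta namespace), taken just before the namespace is torn down. A minimal sketch of reading the same IFLA_* attributes with pyroute2, assuming the namespace still existed; illustrative only, not the agent's actual code path:

```python
from pyroute2 import NetNS

# Query link attributes inside the (since-deleted) metadata namespace.
ns = NetNS("ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68")
try:
    for link in ns.get_links():
        # get_attr() pulls a single IFLA_* value out of the attrs list
        print(link.get_attr("IFLA_IFNAME"),
              link.get_attr("IFLA_MTU"),
              link["state"])
finally:
    ns.close()
```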
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.528 165227 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 10:00:42 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:42.528 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[9eedbbb5-cd16-4376-b59f-853e90746809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:00:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d2d64d66d\x2d0f9e\x2d4429\x2da21c\x2d7e55f44b1e68.mount: Deactivated successfully.
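[editor's note] The mount-unit name above is the systemd escaping of /run/netns/ovnmeta-2d64d66d-...: "/" becomes "-" and each literal "-" in the path is escaped as \x2d. A minimal decoder for reading such unit names back; plain Python, no systemd dependency:

```python
import re

def systemd_unescape(unit: str) -> str:
    # Reverse systemd's \xNN byte escapes (e.g. \x2d -> "-").
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), unit)

name = r"run-netns-ovnmeta\x2d2d64d66d\x2d0f9e\x2d4429\x2da21c\x2d7e55f44b1e68"
print(systemd_unescape(name))
# -> run-netns-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68
```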
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.668 257704 DEBUG nova.compute.manager [req-2f5d0eef-b324-4c61-b85d-4472fd8cb341 req-4c02a809-8295-417d-b767-b7b3c1036c5c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Received event network-vif-unplugged-fe53799e-0d96-417b-8153-212f65cd709e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.669 257704 DEBUG oslo_concurrency.lockutils [req-2f5d0eef-b324-4c61-b85d-4472fd8cb341 req-4c02a809-8295-417d-b767-b7b3c1036c5c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.669 257704 DEBUG oslo_concurrency.lockutils [req-2f5d0eef-b324-4c61-b85d-4472fd8cb341 req-4c02a809-8295-417d-b767-b7b3c1036c5c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.669 257704 DEBUG oslo_concurrency.lockutils [req-2f5d0eef-b324-4c61-b85d-4472fd8cb341 req-4c02a809-8295-417d-b767-b7b3c1036c5c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.669 257704 DEBUG nova.compute.manager [req-2f5d0eef-b324-4c61-b85d-4472fd8cb341 req-4c02a809-8295-417d-b767-b7b3c1036c5c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] No waiting events found dispatching network-vif-unplugged-fe53799e-0d96-417b-8153-212f65cd709e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.670 257704 DEBUG nova.compute.manager [req-2f5d0eef-b324-4c61-b85d-4472fd8cb341 req-4c02a809-8295-417d-b767-b7b3c1036c5c 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Received event network-vif-unplugged-fe53799e-0d96-417b-8153-212f65cd709e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
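[editor's note] The Acquiring/acquired/released triplets above are emitted by the `inner` wrapper of oslo.concurrency's synchronized decorator, here guarding the instance's pending-event map. A minimal sketch of the same pattern; the lock name is copied from the log, the function body is hypothetical:

```python
from oslo_concurrency import lockutils

# The decorator serializes callers on the named lock and logs the
# "Acquiring lock ... / acquired ... / released" DEBUG lines seen above.
@lockutils.synchronized("7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events")
def _pop_event():
    # critical section: pop the matching event, if any, from the map
    pass

_pop_event()
```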
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.902 257704 INFO nova.virt.libvirt.driver [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Deleting instance files /var/lib/nova/instances/7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_del
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.903 257704 INFO nova.virt.libvirt.driver [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Deletion of /var/lib/nova/instances/7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e_del complete
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.951 257704 INFO nova.compute.manager [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Took 0.95 seconds to destroy the instance on the hypervisor.
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.951 257704 DEBUG oslo.service.loopingcall [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.951 257704 DEBUG nova.compute.manager [-] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 10:00:42 compute-0 nova_compute[257700]: 2025-11-24 10:00:42.952 257704 DEBUG nova.network.neutron [-] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 10:00:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1018: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 10:00:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:43.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:43 compute-0 nova_compute[257700]: 2025-11-24 10:00:43.430 257704 DEBUG nova.network.neutron [req-78790f93-a2a9-4c72-bde8-40d50b048ffa req-a6a350c1-c723-4ddd-8c5b-b6e8b060d42d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Updated VIF entry in instance network info cache for port fe53799e-0d96-417b-8153-212f65cd709e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:00:43 compute-0 nova_compute[257700]: 2025-11-24 10:00:43.431 257704 DEBUG nova.network.neutron [req-78790f93-a2a9-4c72-bde8-40d50b048ffa req-a6a350c1-c723-4ddd-8c5b-b6e8b060d42d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Updating instance_info_cache with network_info: [{"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:00:43 compute-0 nova_compute[257700]: 2025-11-24 10:00:43.445 257704 DEBUG oslo_concurrency.lockutils [req-78790f93-a2a9-4c72-bde8-40d50b048ffa req-a6a350c1-c723-4ddd-8c5b-b6e8b060d42d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:00:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:44 compute-0 ceph-mon[74331]: pgmap v1018: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 10:00:44 compute-0 nova_compute[257700]: 2025-11-24 10:00:44.436 257704 DEBUG nova.network.neutron [-] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:00:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:44.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:44 compute-0 nova_compute[257700]: 2025-11-24 10:00:44.456 257704 INFO nova.compute.manager [-] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Took 1.50 seconds to deallocate network for instance.
Nov 24 10:00:44 compute-0 nova_compute[257700]: 2025-11-24 10:00:44.495 257704 DEBUG oslo_concurrency.lockutils [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:44 compute-0 nova_compute[257700]: 2025-11-24 10:00:44.495 257704 DEBUG oslo_concurrency.lockutils [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:44 compute-0 nova_compute[257700]: 2025-11-24 10:00:44.554 257704 DEBUG oslo_concurrency.processutils [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:44 compute-0 nova_compute[257700]: 2025-11-24 10:00:44.733 257704 DEBUG nova.compute.manager [req-4d14ae38-d1d3-449a-8920-bdf78514212a req-49012bbb-ca81-4217-afbf-2af1fae0eae0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Received event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:00:44 compute-0 nova_compute[257700]: 2025-11-24 10:00:44.734 257704 DEBUG oslo_concurrency.lockutils [req-4d14ae38-d1d3-449a-8920-bdf78514212a req-49012bbb-ca81-4217-afbf-2af1fae0eae0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:44 compute-0 nova_compute[257700]: 2025-11-24 10:00:44.734 257704 DEBUG oslo_concurrency.lockutils [req-4d14ae38-d1d3-449a-8920-bdf78514212a req-49012bbb-ca81-4217-afbf-2af1fae0eae0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:44 compute-0 nova_compute[257700]: 2025-11-24 10:00:44.735 257704 DEBUG oslo_concurrency.lockutils [req-4d14ae38-d1d3-449a-8920-bdf78514212a req-49012bbb-ca81-4217-afbf-2af1fae0eae0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:44 compute-0 nova_compute[257700]: 2025-11-24 10:00:44.735 257704 DEBUG nova.compute.manager [req-4d14ae38-d1d3-449a-8920-bdf78514212a req-49012bbb-ca81-4217-afbf-2af1fae0eae0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] No waiting events found dispatching network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:00:44 compute-0 nova_compute[257700]: 2025-11-24 10:00:44.735 257704 WARNING nova.compute.manager [req-4d14ae38-d1d3-449a-8920-bdf78514212a req-49012bbb-ca81-4217-afbf-2af1fae0eae0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Received unexpected event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e for instance with vm_state deleted and task_state None.
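[editor's note] The network-vif-unplugged/plugged events above arrive from Neutron via Nova's os-server-external-events API; the late "plugged" event lands after the instance has already been deleted, hence the WARNING rather than an error. A hedged sketch of the call that produces these events; endpoint URL and token are placeholders, a real caller resolves both through Keystone:

```python
import requests

NOVA = "http://nova-api.example.com:8774/v2.1"  # placeholder endpoint
TOKEN = "gAAAA..."                               # placeholder service token

resp = requests.post(
    f"{NOVA}/os-server-external-events",
    headers={"X-Auth-Token": TOKEN},
    json={"events": [{
        "name": "network-vif-plugged",
        "server_uuid": "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e",
        "tag": "fe53799e-0d96-417b-8153-212f65cd709e",
        "status": "completed",
    }]},
)
resp.raise_for_status()
```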
Nov 24 10:00:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:00:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:00:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:00:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:00:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:00:45 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4136452768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:45 compute-0 nova_compute[257700]: 2025-11-24 10:00:45.037 257704 DEBUG oslo_concurrency.processutils [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
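[editor's note] The resource tracker shells out to `ceph df` (0.483s above) to size the shared RBD pool before refreshing its DISK_GB inventory. A minimal standalone sketch of the same probe; the client id and conf path are copied from the logged command line, and the JSON field names assume the stock `ceph df --format=json` schema:

```python
import json
import subprocess

def ceph_df(client_id: str = "openstack",
            conf: str = "/etc/ceph/ceph.conf") -> dict:
    # Same invocation as the logged subprocess call.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf])
    return json.loads(out)

stats = ceph_df()
# Cluster-wide totals; per-pool usage lives under stats["pools"].
print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
```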
Nov 24 10:00:45 compute-0 nova_compute[257700]: 2025-11-24 10:00:45.042 257704 DEBUG nova.compute.provider_tree [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:00:45 compute-0 nova_compute[257700]: 2025-11-24 10:00:45.056 257704 DEBUG nova.scheduler.client.report [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:00:45 compute-0 nova_compute[257700]: 2025-11-24 10:00:45.073 257704 DEBUG oslo_concurrency.lockutils [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 10:00:45 compute-0 nova_compute[257700]: 2025-11-24 10:00:45.100 257704 INFO nova.scheduler.client.report [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e
Nov 24 10:00:45 compute-0 nova_compute[257700]: 2025-11-24 10:00:45.159 257704 DEBUG oslo_concurrency.lockutils [None req-e7c1862a-13ca-4d4f-a077-ad4ba70c2730 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:45.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:00:45
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'backups', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', '.nfs', 'default.rgw.meta', 'vms']
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:00:45 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4136452768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
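[editor's note] The per-pool arithmetic above is consistent with usage_ratio x bias x a fixed PG budget: 7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337, exactly the '.mgr' target logged. A hypothetical reconstruction, assuming a 3-OSD cluster at the default mon_target_pg_per_osd=100 (budget 300); the real autoscaler also respects per-pool minimums and only acts when off by roughly 3x, which is why idle pools above stay at "quantized to 32 (current 32)":

```python
import math

PG_BUDGET = 3 * 100  # assumed: 3 OSDs x mon_target_pg_per_osd=100

def raw_pg_target(usage_ratio: float, bias: float,
                  budget: int = PG_BUDGET) -> float:
    # e.g. '.mgr': 7.185749983720779e-06 * 1.0 * 300 -> 0.0021557249951162337
    return usage_ratio * bias * budget

def quantized(raw: float, pg_num_min: int = 1) -> int:
    # Round up to a power of two, floored at the pool's minimum pg_num.
    if raw <= 1:
        return max(1, pg_num_min)
    return max(pg_num_min, 2 ** math.ceil(math.log2(raw)))

assert quantized(raw_pg_target(7.185749983720779e-06, 1.0)) == 1          # '.mgr'
assert quantized(raw_pg_target(5.087256625643029e-07, 4.0), 16) == 16     # cephfs meta
```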
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:00:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:00:46 compute-0 nova_compute[257700]: 2025-11-24 10:00:46.053 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:46 compute-0 ceph-mon[74331]: pgmap v1019: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 10:00:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:00:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:46.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:00:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:47.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:47 compute-0 nova_compute[257700]: 2025-11-24 10:00:47.276 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:47.544Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:00:47 compute-0 sudo[276501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:00:47 compute-0 sudo[276501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:00:47 compute-0 sudo[276501]: pam_unix(sudo:session): session closed for user root
Nov 24 10:00:48 compute-0 ceph-mon[74331]: pgmap v1020: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:00:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:48.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:48 compute-0 podman[276529]: 2025-11-24 10:00:48.787891329 +0000 UTC m=+0.058581069 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 10:00:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:48.898Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:00:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:48.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:00:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:00:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:49.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:49 compute-0 sshd-session[276526]: Received disconnect from 36.255.3.203 port 52094:11: Bye Bye [preauth]
Nov 24 10:00:49 compute-0 sshd-session[276526]: Disconnected from authenticating user root 36.255.3.203 port 52094 [preauth]
Nov 24 10:00:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:00:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:00:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:00:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:00:50 compute-0 ceph-mon[74331]: pgmap v1021: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:00:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:00:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:50.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:00:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:00:50] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Nov 24 10:00:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:00:50] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Nov 24 10:00:51 compute-0 nova_compute[257700]: 2025-11-24 10:00:51.056 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:00:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:51.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:52 compute-0 nova_compute[257700]: 2025-11-24 10:00:52.280 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:52.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:52 compute-0 ceph-mon[74331]: pgmap v1022: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:00:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:00:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:53.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:00:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:54.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:00:54 compute-0 ceph-mon[74331]: pgmap v1023: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:00:54 compute-0 sshd-session[276474]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:00:54 compute-0 sshd-session[276474]: banner exchange: Connection from 14.215.126.91 port 56306: Connection timed out
Nov 24 10:00:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:00:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:00:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:00:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:00:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 24 10:00:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:00:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:55.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:00:56 compute-0 nova_compute[257700]: 2025-11-24 10:00:56.059 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:56 compute-0 ceph-mon[74331]: pgmap v1024: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 24 10:00:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:00:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:56.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:00:56 compute-0 nova_compute[257700]: 2025-11-24 10:00:56.879 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "7209acb4-1927-431b-ad9e-0838a25f1f80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:56 compute-0 nova_compute[257700]: 2025-11-24 10:00:56.880 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:56 compute-0 nova_compute[257700]: 2025-11-24 10:00:56.908 257704 DEBUG nova.compute.manager [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.016 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.018 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.027 257704 DEBUG nova.virt.hardware [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.028 257704 INFO nova.compute.claims [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Claim successful on node compute-0.ctlplane.example.com
Nov 24 10:00:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.156 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:57.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.254 257704 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978442.2527747, 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.255 257704 INFO nova.compute.manager [-] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] VM Stopped (Lifecycle Event)
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.272 257704 DEBUG nova.compute.manager [None req-e77da89a-2322-4c06-a3b0-daef85f1dada - - - - - -] [instance: 7ae1b935-a8f2-4ed3-965b-a3a3c7225e9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.282 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:00:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:57.545Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:00:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:00:57 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201546860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.634 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.644 257704 DEBUG nova.compute.provider_tree [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.657 257704 DEBUG nova.scheduler.client.report [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.679 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:57 compute-0 nova_compute[257700]: 2025-11-24 10:00:57.680 257704 DEBUG nova.compute.manager [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 10:00:58 compute-0 nova_compute[257700]: 2025-11-24 10:00:58.138 257704 DEBUG nova.compute.manager [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 10:00:58 compute-0 nova_compute[257700]: 2025-11-24 10:00:58.138 257704 DEBUG nova.network.neutron [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 10:00:58 compute-0 ceph-mon[74331]: pgmap v1025: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 24 10:00:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/201546860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:00:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:00:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:00:58.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:00:58 compute-0 nova_compute[257700]: 2025-11-24 10:00:58.611 257704 DEBUG nova.policy [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 24 10:00:58 compute-0 nova_compute[257700]: 2025-11-24 10:00:58.625 257704 INFO nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 10:00:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:00:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:00:58.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
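The dispatcher's retries to the prometheus_receiver endpoints on compute-1 and compute-2 both ended in "context deadline exceeded", i.e. the POSTs never completed in time. A minimal reachability probe for those endpoints using only the Python standard library; the URLs are copied from the error above, and the 5-second timeout is an assumption:

    import urllib.request

    # Endpoints copied from the dispatcher error above; the 5 s timeout is an assumption.
    ENDPOINTS = [
        'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver',
        'http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver',
    ]

    for url in ENDPOINTS:
        req = urllib.request.Request(
            url, data=b'{}',
            headers={'Content-Type': 'application/json'}, method='POST')
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(url, resp.status)
        except OSError as exc:  # URLError and socket timeouts both subclass OSError
            # A hang or refusal here reproduces the "context deadline exceeded" above.
            print(url, 'unreachable:', exc)

A timeout from this probe points at the receiver being down or filtered rather than at alertmanager itself.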
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.034 257704 DEBUG nova.compute.manager [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 10:00:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1026: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.191 257704 DEBUG nova.compute.manager [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.193 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.193 257704 INFO nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Creating image(s)
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.222 257704 DEBUG nova.storage.rbd_utils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7209acb4-1927-431b-ad9e-0838a25f1f80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:00:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:00:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:00:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:00:59.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.247 257704 DEBUG nova.storage.rbd_utils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7209acb4-1927-431b-ad9e-0838a25f1f80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.271 257704 DEBUG nova.storage.rbd_utils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7209acb4-1927-431b-ad9e-0838a25f1f80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.274 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.344 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
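The pair of processutils lines above shows nova wrapping qemu-img behind oslo.concurrency's prlimit helper: --as=1073741824 caps the address space at 1 GiB and --cpu=30 caps CPU time at 30 seconds, so a malformed image cannot wedge the compute service. A minimal sketch of the same invocation through oslo_concurrency.processutils, assuming that package and the base-image path from the log:

    from oslo_concurrency import processutils

    # Resource caps mirror the logged prlimit flags: --as=1073741824 (1 GiB
    # address space) and --cpu=30 (30 CPU-seconds).
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    # --force-share lets qemu-img inspect an image another process holds open.
    out, err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40',
        '--force-share', '--output=json',
        prlimit=limits,
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
    )
    print(out)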
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.345 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.346 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.346 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.368 257704 DEBUG nova.storage.rbd_utils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7209acb4-1927-431b-ad9e-0838a25f1f80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.371 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 7209acb4-1927-431b-ad9e-0838a25f1f80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.619 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 7209acb4-1927-431b-ad9e-0838a25f1f80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.680 257704 DEBUG nova.storage.rbd_utils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 7209acb4-1927-431b-ad9e-0838a25f1f80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
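The repeated "rbd image ... does not exist" probes are nova opening the image to test for existence before falling back to `rbd import`; the resize that follows grows the imported disk to the flavor's 1 GiB root volume. A minimal sketch of the same probe-and-resize against the vms pool via the python-rbd bindings, assuming the client.openstack identity from the log:

    import rados
    import rbd

    # Connect with the same Ceph identity nova uses in the log (client.openstack).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        name = '7209acb4-1927-431b-ad9e-0838a25f1f80_disk'
        try:
            with rbd.Image(ioctx, name) as image:
                # Image present: grow it to the flavor's 1 GiB root disk,
                # the same size rbd_utils.resize logs above.
                image.resize(1073741824)
        except rbd.ImageNotFound:
            # Matches the "rbd image ... does not exist" probes in the log;
            # nova then falls back to the `rbd import` subprocess.
            print(name, 'not found; import it first')
    finally:
        ioctx.close()
        cluster.shutdown()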
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.781 257704 DEBUG nova.objects.instance [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 7209acb4-1927-431b-ad9e-0838a25f1f80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.795 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.796 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Ensure instance console log exists: /var/lib/nova/instances/7209acb4-1927-431b-ad9e-0838a25f1f80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.796 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.797 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.797 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:00:59 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:59.886 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:00:59 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:00:59.887 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 10:00:59 compute-0 nova_compute[257700]: 2025-11-24 10:00:59.886 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:01:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:01:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:00:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:01:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:01:00 compute-0 ceph-mon[74331]: pgmap v1026: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:00.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:01:00] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 24 10:01:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:01:00] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 24 10:01:01 compute-0 nova_compute[257700]: 2025-11-24 10:01:01.059 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:01.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:01 compute-0 CROND[276750]: (root) CMD (run-parts /etc/cron.hourly)
Nov 24 10:01:01 compute-0 run-parts[276753]: (/etc/cron.hourly) starting 0anacron
Nov 24 10:01:01 compute-0 run-parts[276759]: (/etc/cron.hourly) finished 0anacron
Nov 24 10:01:01 compute-0 CROND[276749]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 24 10:01:01 compute-0 nova_compute[257700]: 2025-11-24 10:01:01.392 257704 DEBUG nova.network.neutron [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Successfully updated port: fe53799e-0d96-417b-8153-212f65cd709e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 10:01:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:01:01 compute-0 nova_compute[257700]: 2025-11-24 10:01:01.433 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-7209acb4-1927-431b-ad9e-0838a25f1f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:01:01 compute-0 nova_compute[257700]: 2025-11-24 10:01:01.433 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-7209acb4-1927-431b-ad9e-0838a25f1f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:01:01 compute-0 nova_compute[257700]: 2025-11-24 10:01:01.433 257704 DEBUG nova.network.neutron [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 10:01:01 compute-0 nova_compute[257700]: 2025-11-24 10:01:01.497 257704 DEBUG nova.compute.manager [req-a68e2073-ca10-426c-a7ec-4cf3826bab62 req-c254f078-8e61-4dcc-bd92-df2fd402a264 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Received event network-changed-fe53799e-0d96-417b-8153-212f65cd709e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:01:01 compute-0 nova_compute[257700]: 2025-11-24 10:01:01.498 257704 DEBUG nova.compute.manager [req-a68e2073-ca10-426c-a7ec-4cf3826bab62 req-c254f078-8e61-4dcc-bd92-df2fd402a264 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Refreshing instance network info cache due to event network-changed-fe53799e-0d96-417b-8153-212f65cd709e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:01:01 compute-0 nova_compute[257700]: 2025-11-24 10:01:01.498 257704 DEBUG oslo_concurrency.lockutils [req-a68e2073-ca10-426c-a7ec-4cf3826bab62 req-c254f078-8e61-4dcc-bd92-df2fd402a264 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-7209acb4-1927-431b-ad9e-0838a25f1f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:01:01 compute-0 nova_compute[257700]: 2025-11-24 10:01:01.704 257704 DEBUG nova.network.neutron [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.286 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:02 compute-0 ceph-mon[74331]: pgmap v1027: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/951297493' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:01:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/951297493' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:01:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:02.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.757 257704 DEBUG nova.network.neutron [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Updating instance_info_cache with network_info: [{"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
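The cache entry above is a JSON list of VIF dicts. For log triage it can be handy to pull the fixed and floating addresses back out of such a payload; a short sketch, assuming the JSON has been pasted into a file named network_info.json (hypothetical filename):

    import json

    # network_info payload as logged by update_instance_cache_with_nw_info,
    # saved to a file for offline inspection (hypothetical filename).
    with open('network_info.json') as fh:
        vifs = json.load(fh)

    for vif in vifs:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                # For the entry above this prints:
                # fe53799e-0d96-417b-8153-212f65cd709e 10.100.0.9 ['192.168.122.219']
                print(vif['id'], ip['address'], floats)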
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.788 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-7209acb4-1927-431b-ad9e-0838a25f1f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.789 257704 DEBUG nova.compute.manager [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Instance network_info: |[{"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.789 257704 DEBUG oslo_concurrency.lockutils [req-a68e2073-ca10-426c-a7ec-4cf3826bab62 req-c254f078-8e61-4dcc-bd92-df2fd402a264 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-7209acb4-1927-431b-ad9e-0838a25f1f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.789 257704 DEBUG nova.network.neutron [req-a68e2073-ca10-426c-a7ec-4cf3826bab62 req-c254f078-8e61-4dcc-bd92-df2fd402a264 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Refreshing network info cache for port fe53799e-0d96-417b-8153-212f65cd709e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.791 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Start _get_guest_xml network_info=[{"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_format': None, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.795 257704 WARNING nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.800 257704 DEBUG nova.virt.libvirt.host [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.800 257704 DEBUG nova.virt.libvirt.host [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.807 257704 DEBUG nova.virt.libvirt.host [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.807 257704 DEBUG nova.virt.libvirt.host [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.807 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.808 257704 DEBUG nova.virt.hardware [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.808 257704 DEBUG nova.virt.hardware [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.808 257704 DEBUG nova.virt.hardware [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.808 257704 DEBUG nova.virt.hardware [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.809 257704 DEBUG nova.virt.hardware [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.809 257704 DEBUG nova.virt.hardware [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.809 257704 DEBUG nova.virt.hardware [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.809 257704 DEBUG nova.virt.hardware [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.809 257704 DEBUG nova.virt.hardware [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.810 257704 DEBUG nova.virt.hardware [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.810 257704 DEBUG nova.virt.hardware [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
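The hardware.py lines walk from fully unconstrained limits (0:0:0 from both flavor and image) to the single admissible topology for one vCPU. A standalone sketch of that enumeration under the simplified rule that sockets x cores x threads must equal the vCPU count, each factor within the logged 65536 ceiling (the real _get_possible_cpu_topologies adds preference ordering on top):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Simplified take on nova.virt.hardware._get_possible_cpu_topologies:
        # keep every (sockets, cores, threads) triple whose product equals the
        # vCPU count and whose factors respect the per-dimension limits.
        return [
            (s, c, t)
            for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                                   range(1, min(vcpus, max_cores) + 1),
                                   range(1, min(vcpus, max_threads) + 1))
            if s * c * t == vcpus
        ]

    # For the m1.nano flavor (1 vCPU) this yields the single topology chosen above.
    print(possible_topologies(1))  # [(1, 1, 1)]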
Nov 24 10:01:02 compute-0 nova_compute[257700]: 2025-11-24 10:01:02.812 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:02 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:02.888 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:01:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:03.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 10:01:03 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1491936105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.314 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.342 257704 DEBUG nova.storage.rbd_utils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7209acb4-1927-431b-ad9e-0838a25f1f80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.347 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:03 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1491936105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:01:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 10:01:03 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1956899645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.784 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
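nova shells out to `ceph mon dump` here to discover the monitor addresses that end up as <host> elements in the guest's rbd disk XML further down. A minimal sketch of the same discovery step, assuming the client.openstack keyring referenced in the log:

    import json
    import subprocess

    # Same command the log shows nova running; needs the client.openstack keyring.
    raw = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    monmap = json.loads(raw)
    for mon in monmap['mons']:
        # These addresses become the <host> elements of the rbd disk sources.
        print(mon['name'], mon.get('addr'))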
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.787 257704 DEBUG nova.virt.libvirt.vif [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-701599068',display_name='tempest-TestNetworkBasicOps-server-701599068',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-701599068',id=9,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIp7477cU+9BtZz5V0pd+PQuO3Ovurx4E+UV6WrNVcmUwBGTU+abM49xOCN0WJEBvIaQJzEyMu21dsjeLfPrFNAGWfarLsdzJyntI7oM2ea7tasmabzC9knEz3j7fj67Bw==',key_name='tempest-TestNetworkBasicOps-771347860',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-die6x24m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:00:59Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=7209acb4-1927-431b-ad9e-0838a25f1f80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.787 257704 DEBUG nova.network.os_vif_util [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.788 257704 DEBUG nova.network.os_vif_util [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.790 257704 DEBUG nova.objects.instance [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 7209acb4-1927-431b-ad9e-0838a25f1f80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.806 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] End _get_guest_xml xml=<domain type="kvm">
Nov 24 10:01:03 compute-0 nova_compute[257700]:   <uuid>7209acb4-1927-431b-ad9e-0838a25f1f80</uuid>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   <name>instance-00000009</name>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   <memory>131072</memory>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   <vcpu>1</vcpu>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   <metadata>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <nova:name>tempest-TestNetworkBasicOps-server-701599068</nova:name>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <nova:creationTime>2025-11-24 10:01:02</nova:creationTime>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <nova:flavor name="m1.nano">
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <nova:memory>128</nova:memory>
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <nova:disk>1</nova:disk>
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <nova:swap>0</nova:swap>
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <nova:vcpus>1</nova:vcpus>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       </nova:flavor>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <nova:owner>
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       </nova:owner>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <nova:ports>
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <nova:port uuid="fe53799e-0d96-417b-8153-212f65cd709e">
Nov 24 10:01:03 compute-0 nova_compute[257700]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:         </nova:port>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       </nova:ports>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     </nova:instance>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   </metadata>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   <sysinfo type="smbios">
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <system>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <entry name="manufacturer">RDO</entry>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <entry name="product">OpenStack Compute</entry>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <entry name="serial">7209acb4-1927-431b-ad9e-0838a25f1f80</entry>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <entry name="uuid">7209acb4-1927-431b-ad9e-0838a25f1f80</entry>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <entry name="family">Virtual Machine</entry>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     </system>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   </sysinfo>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   <os>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <boot dev="hd"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <smbios mode="sysinfo"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   </os>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   <features>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <acpi/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <apic/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <vmcoreinfo/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   </features>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   <clock offset="utc">
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <timer name="hpet" present="no"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   </clock>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   <cpu mode="host-model" match="exact">
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   </cpu>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   <devices>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <disk type="network" device="disk">
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/7209acb4-1927-431b-ad9e-0838a25f1f80_disk">
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       </source>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       </auth>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <target dev="vda" bus="virtio"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     </disk>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <disk type="network" device="cdrom">
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/7209acb4-1927-431b-ad9e-0838a25f1f80_disk.config">
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       </source>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 10:01:03 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       </auth>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <target dev="sda" bus="sata"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     </disk>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <interface type="ethernet">
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <mac address="fa:16:3e:19:3d:30"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <mtu size="1442"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <target dev="tapfe53799e-0d"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     </interface>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <serial type="pty">
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <log file="/var/lib/nova/instances/7209acb4-1927-431b-ad9e-0838a25f1f80/console.log" append="off"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     </serial>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <video>
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     </video>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <input type="tablet" bus="usb"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <rng model="virtio">
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <backend model="random">/dev/urandom</backend>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     </rng>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <controller type="usb" index="0"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     <memballoon model="virtio">
Nov 24 10:01:03 compute-0 nova_compute[257700]:       <stats period="10"/>
Nov 24 10:01:03 compute-0 nova_compute[257700]:     </memballoon>
Nov 24 10:01:03 compute-0 nova_compute[257700]:   </devices>
Nov 24 10:01:03 compute-0 nova_compute[257700]: </domain>
Nov 24 10:01:03 compute-0 nova_compute[257700]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
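With _get_guest_xml finished, the driver hands this document to libvirt to define and boot the guest. A minimal sketch of that hand-off through the libvirt-python bindings, assuming a local qemu:///system connection and the XML saved to domain.xml (hypothetical filename); nova's real spawn path adds flags for event handling and rollback:

    import libvirt

    # domain.xml holds the <domain> document logged above (hypothetical filename).
    with open('domain.xml') as fh:
        xml = fh.read()

    conn = libvirt.open('qemu:///system')
    try:
        # Define the persistent domain, then start it; the driver follows the
        # same define-then-create pattern, with extra flags for event handling.
        dom = conn.defineXML(xml)
        dom.create()
        print(dom.name(), 'active:', dom.isActive() == 1)
    finally:
        conn.close()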
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.808 257704 DEBUG nova.compute.manager [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Preparing to wait for external event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.808 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.808 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.809 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.810 257704 DEBUG nova.virt.libvirt.vif [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-701599068',display_name='tempest-TestNetworkBasicOps-server-701599068',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-701599068',id=9,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIp7477cU+9BtZz5V0pd+PQuO3Ovurx4E+UV6WrNVcmUwBGTU+abM49xOCN0WJEBvIaQJzEyMu21dsjeLfPrFNAGWfarLsdzJyntI7oM2ea7tasmabzC9knEz3j7fj67Bw==',key_name='tempest-TestNetworkBasicOps-771347860',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-die6x24m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:00:59Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=7209acb4-1927-431b-ad9e-0838a25f1f80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.810 257704 DEBUG nova.network.os_vif_util [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.811 257704 DEBUG nova.network.os_vif_util [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.811 257704 DEBUG os_vif [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
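nova has now converted its own VIF model into the os-vif VIFOpenVSwitch object and hands it to the os_vif library; the 'ovs' plugin named in the object performs the OVSDB work logged next. A hedged sketch of that entry point, with field values copied from the object above and error handling omitted:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the ovs/linux_bridge/... plugins
    ovs_vif = vif.VIFOpenVSwitch(
        id="fe53799e-0d96-417b-8153-212f65cd709e",
        address="fa:16:3e:19:3d:30",
        bridge_name="br-int",
        vif_name="tapfe53799e-0d",
        network=network.Network(id="2d64d66d-0f9e-4429-a21c-7e55f44b1e68"),
    )
    inst = instance_info.InstanceInfo(
        uuid="7209acb4-1927-431b-ad9e-0838a25f1f80",
        name="instance-00000009",
    )
    os_vif.plug(ovs_vif, inst)  # drives the AddPort/DbSet transactions below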
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.812 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.812 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.813 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.816 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.816 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe53799e-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.817 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfe53799e-0d, col_values=(('external_ids', {'iface-id': 'fe53799e-0d96-417b-8153-212f65cd709e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:3d:30', 'vm-uuid': '7209acb4-1927-431b-ad9e-0838a25f1f80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.819 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:03 compute-0 NetworkManager[48883]: <info>  [1763978463.8205] manager: (tapfe53799e-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.821 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.829 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.831 257704 INFO os_vif [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d')
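The plug itself amounts to the two OVSDB transactions just logged: AddPortCommand attaches the tap device to br-int, and DbSetCommand writes the external_ids (iface-id, attached-mac, vm-uuid) that ovn-controller matches against its logical ports. Roughly the same, done directly with ovsdbapp; the socket path and timeout here are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapfe53799e-0d", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapfe53799e-0d",
            ("external_ids", {
                "iface-id": "fe53799e-0d96-417b-8153-212f65cd709e",
                "attached-mac": "fa:16:3e:19:3d:30",
            })))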
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.894 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.894 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.895 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:19:3d:30, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.895 257704 INFO nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Using config drive
Nov 24 10:01:03 compute-0 nova_compute[257700]: 2025-11-24 10:01:03.925 257704 DEBUG nova.storage.rbd_utils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7209acb4-1927-431b-ad9e-0838a25f1f80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:01:04 compute-0 ceph-mon[74331]: pgmap v1028: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:01:04 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1956899645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:01:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:04.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:04 compute-0 nova_compute[257700]: 2025-11-24 10:01:04.689 257704 INFO nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Creating config drive at /var/lib/nova/instances/7209acb4-1927-431b-ad9e-0838a25f1f80/disk.config
Nov 24 10:01:04 compute-0 nova_compute[257700]: 2025-11-24 10:01:04.700 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7209acb4-1927-431b-ad9e-0838a25f1f80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv1mlml96 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:04 compute-0 nova_compute[257700]: 2025-11-24 10:01:04.838 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7209acb4-1927-431b-ad9e-0838a25f1f80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv1mlml96" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:04 compute-0 nova_compute[257700]: 2025-11-24 10:01:04.871 257704 DEBUG nova.storage.rbd_utils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 7209acb4-1927-431b-ad9e-0838a25f1f80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:01:04 compute-0 nova_compute[257700]: 2025-11-24 10:01:04.875 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7209acb4-1927-431b-ad9e-0838a25f1f80/disk.config 7209acb4-1927-431b-ad9e-0838a25f1f80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:01:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:01:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:01:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.028 257704 DEBUG oslo_concurrency.processutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7209acb4-1927-431b-ad9e-0838a25f1f80/disk.config 7209acb4-1927-431b-ad9e-0838a25f1f80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.029 257704 INFO nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Deleting local config drive /var/lib/nova/instances/7209acb4-1927-431b-ad9e-0838a25f1f80/disk.config because it was imported into RBD.
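Config-drive creation on an rbd-backed node is the three steps visible above: mkisofs packs a temporary metadata directory into an ISO9660 volume labelled config-2, rbd import copies the ISO into the vms pool as <uuid>_disk.config, and the local file is removed. The same two commands, reduced to a subprocess sketch (the metadata directory stands in for nova's ephemeral tmp dir):

    import subprocess

    iso = "/var/lib/nova/instances/7209acb4-1927-431b-ad9e-0838a25f1f80/disk.config"
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
         "/tmp/metadata-dir"],  # stand-in for the tmp dir in the log
        check=True)
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         "7209acb4-1927-431b-ad9e-0838a25f1f80_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)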
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.067 257704 DEBUG nova.network.neutron [req-a68e2073-ca10-426c-a7ec-4cf3826bab62 req-c254f078-8e61-4dcc-bd92-df2fd402a264 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Updated VIF entry in instance network info cache for port fe53799e-0d96-417b-8153-212f65cd709e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.068 257704 DEBUG nova.network.neutron [req-a68e2073-ca10-426c-a7ec-4cf3826bab62 req-c254f078-8e61-4dcc-bd92-df2fd402a264 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Updating instance_info_cache with network_info: [{"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.084 257704 DEBUG oslo_concurrency.lockutils [req-a68e2073-ca10-426c-a7ec-4cf3826bab62 req-c254f078-8e61-4dcc-bd92-df2fd402a264 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-7209acb4-1927-431b-ad9e-0838a25f1f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:01:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:01:05 compute-0 kernel: tapfe53799e-0d: entered promiscuous mode
Nov 24 10:01:05 compute-0 NetworkManager[48883]: <info>  [1763978465.1110] manager: (tapfe53799e-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.112 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:05 compute-0 ovn_controller[155123]: 2025-11-24T10:01:05Z|00069|binding|INFO|Claiming lport fe53799e-0d96-417b-8153-212f65cd709e for this chassis.
Nov 24 10:01:05 compute-0 ovn_controller[155123]: 2025-11-24T10:01:05Z|00070|binding|INFO|fe53799e-0d96-417b-8153-212f65cd709e: Claiming fa:16:3e:19:3d:30 10.100.0.9
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.123 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:3d:30 10.100.0.9'], port_security=['fa:16:3e:19:3d:30 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1496723339', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7209acb4-1927-431b-ad9e-0838a25f1f80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1496723339', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '7', 'neutron:security_group_ids': '33c3a403-57a0-4b88-8817-f12f4bfc92ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd5e84a2-6af3-4d25-9e2e-39e01701962b, chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=fe53799e-0d96-417b-8153-212f65cd709e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.124 165073 INFO neutron.agent.ovn.metadata.agent [-] Port fe53799e-0d96-417b-8153-212f65cd709e in datapath 2d64d66d-0f9e-4429-a21c-7e55f44b1e68 bound to our chassis
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.125 165073 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d64d66d-0f9e-4429-a21c-7e55f44b1e68
Nov 24 10:01:05 compute-0 ovn_controller[155123]: 2025-11-24T10:01:05Z|00071|binding|INFO|Setting lport fe53799e-0d96-417b-8153-212f65cd709e ovn-installed in OVS
Nov 24 10:01:05 compute-0 ovn_controller[155123]: 2025-11-24T10:01:05Z|00072|binding|INFO|Setting lport fe53799e-0d96-417b-8153-212f65cd709e up in Southbound
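ovn-controller has matched the iface-id written into OVS against the Southbound Port_Binding table, claimed the logical port for this chassis, and set it up; that Southbound transition is what lets neutron mark the port ACTIVE and emit the network-vif-plugged event nova registered for at 10:01:03.808. One way to observe the same state change, assuming ovn-sbctl on a node with Southbound access:

    import subprocess

    # Query the binding for this logical port; after the claim, 'up' should
    # be true and 'chassis' should reference compute-0's chassis record.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up", "find",
         "Port_Binding",
         "logical_port=fe53799e-0d96-417b-8153-212f65cd709e"],
        capture_output=True, text=True, check=True).stdout
    print(out)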
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.137 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.139 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.143 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[cf6e4b80-0a12-4974-a629-a194a848302a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.143 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d64d66d-01 in ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
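Provisioning metadata for a network means building an ovnmeta-<network-id> namespace joined to the node by a veth pair: the -1 end lives inside the namespace and will answer on 169.254.169.254, while the -0 end stays outside and is plugged into br-int a few lines below. A pyroute2 sketch of this step, mirroring what neutron's privileged ip_lib helpers do under privsep:

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68"
    netns.create(ns)  # one namespace per datapath
    ipr = IPRoute()
    ipr.link("add", ifname="tap2d64d66d-00", kind="veth",
             peer="tap2d64d66d-01")
    idx = ipr.link_lookup(ifname="tap2d64d66d-01")[0]
    ipr.link("set", index=idx, net_ns_fd=ns)  # move the -01 end inside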
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.146 264910 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d64d66d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.146 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[98fa113c-be63-44f7-8f49-c305099a4a33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.147 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[6fcecbae-3c67-4a96-b5b3-088fdc37e872]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 systemd-udevd[276901]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.160 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[0a36e64e-b1a4-4481-ae7b-5d59a8e3b122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 NetworkManager[48883]: <info>  [1763978465.1758] device (tapfe53799e-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 10:01:05 compute-0 NetworkManager[48883]: <info>  [1763978465.1770] device (tapfe53799e-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 10:01:05 compute-0 systemd-machined[219130]: New machine qemu-5-instance-00000009.
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.183 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[db09a098-a884-4820-bc3b-8394993df275]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000009.
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.214 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[4120eea1-3353-4bd5-bd3b-5e2228fdc562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.221 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[46b56b5c-cee7-4724-a8ac-2d0c588406a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 NetworkManager[48883]: <info>  [1763978465.2230] manager: (tap2d64d66d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Nov 24 10:01:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:05.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.253 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa20160-043c-4dfb-b8ce-54a35a165fb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.256 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec8413b-8bf0-4cfd-8c2e-362c4d3379dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 NetworkManager[48883]: <info>  [1763978465.2797] device (tap2d64d66d-00): carrier: link connected
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.284 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9882df-b7b9-4e09-8cf9-4e03c6baceef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.303 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[02d9b2f0-a710-4804-b3c4-2b07a9cdea0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d64d66d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:58:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441285, 'reachable_time': 44950, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276933, 'error': None, 'target': 'ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.318 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdb1838-38c0-4e5b-a435-c9bdf31fc5bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:5850'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441285, 'tstamp': 441285}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276934, 'error': None, 'target': 'ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.343 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[bbf08c60-877c-4a89-8285-1b3f5471c0ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d64d66d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:58:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441285, 'reachable_time': 44950, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276935, 'error': None, 'target': 'ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.380 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf50fe5-9ab3-4712-8298-5654c6f8dae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.469 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[98af1848-0d76-47ce-9412-d2e9a6ad4d41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.471 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d64d66d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.471 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.472 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d64d66d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:05 compute-0 kernel: tap2d64d66d-00: entered promiscuous mode
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.477 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.479 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d64d66d-00, col_values=(('external_ids', {'iface-id': '711ae8ab-4c6e-4296-ba4f-192226ad0d42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.481 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:05 compute-0 ovn_controller[155123]: 2025-11-24T10:01:05Z|00073|binding|INFO|Releasing lport 711ae8ab-4c6e-4296-ba4f-192226ad0d42 from this chassis (sb_readonly=0)
Nov 24 10:01:05 compute-0 NetworkManager[48883]: <info>  [1763978465.4825] manager: (tap2d64d66d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.494 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.495 165073 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d64d66d-0f9e-4429-a21c-7e55f44b1e68.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d64d66d-0f9e-4429-a21c-7e55f44b1e68.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.496 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[6a444553-33f4-4950-b759-0dbfc676e3d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.497 165073 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: global
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     log         /dev/log local0 debug
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     log-tag     haproxy-metadata-proxy-2d64d66d-0f9e-4429-a21c-7e55f44b1e68
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     user        root
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     group       root
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     maxconn     1024
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     pidfile     /var/lib/neutron/external/pids/2d64d66d-0f9e-4429-a21c-7e55f44b1e68.pid.haproxy
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     daemon
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: defaults
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     log global
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     mode http
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     option httplog
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     option dontlognull
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     option http-server-close
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     option forwardfor
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     retries                 3
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     timeout http-request    30s
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     timeout connect         30s
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     timeout client          32s
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     timeout server          32s
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     timeout http-keep-alive 30s
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: listen listener
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     bind 169.254.169.254:80
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:     http-request add-header X-OVN-Network-ID 2d64d66d-0f9e-4429-a21c-7e55f44b1e68
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 10:01:05 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:05.497 165073 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'env', 'PROCESS_TAG=haproxy-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d64d66d-0f9e-4429-a21c-7e55f44b1e68.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
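The rendered configuration binds haproxy to 169.254.169.254:80 inside the namespace, forwards requests to the metadata agent's unix socket at /var/lib/neutron/metadata_proxy, and adds the X-OVN-Network-ID header so the agent can map each request back to its datapath. Stripped of rootwrap and the PROCESS_TAG environment, the launch reduces to this sketch:

    import subprocess

    ns = "ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68"
    conf = ("/var/lib/neutron/ovn-metadata-proxy/"
            "2d64d66d-0f9e-4429-a21c-7e55f44b1e68.conf")
    # haproxy daemonizes itself (the 'daemon' directive in the config above).
    subprocess.run(["ip", "netns", "exec", ns, "haproxy", "-f", conf],
                   check=True)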
Nov 24 10:01:05 compute-0 podman[276968]: 2025-11-24 10:01:05.886130004 +0000 UTC m=+0.050369983 container create ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 24 10:01:05 compute-0 systemd[1]: Started libpod-conmon-ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8.scope.
Nov 24 10:01:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/659b3371f248065f86334af0b70e022dcdee8c6f44e3bb8d3a6299e82f775ae2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:05 compute-0 podman[276968]: 2025-11-24 10:01:05.858174414 +0000 UTC m=+0.022414403 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 10:01:05 compute-0 podman[276968]: 2025-11-24 10:01:05.967378838 +0000 UTC m=+0.131618857 container init ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.976 257704 DEBUG nova.compute.manager [req-d744e458-85b7-4a41-8b73-c5cbeffa237a req-362b1f8f-8755-417e-a211-7c9761bf4e0e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Received event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.976 257704 DEBUG oslo_concurrency.lockutils [req-d744e458-85b7-4a41-8b73-c5cbeffa237a req-362b1f8f-8755-417e-a211-7c9761bf4e0e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.977 257704 DEBUG oslo_concurrency.lockutils [req-d744e458-85b7-4a41-8b73-c5cbeffa237a req-362b1f8f-8755-417e-a211-7c9761bf4e0e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.977 257704 DEBUG oslo_concurrency.lockutils [req-d744e458-85b7-4a41-8b73-c5cbeffa237a req-362b1f8f-8755-417e-a211-7c9761bf4e0e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:05 compute-0 nova_compute[257700]: 2025-11-24 10:01:05.978 257704 DEBUG nova.compute.manager [req-d744e458-85b7-4a41-8b73-c5cbeffa237a req-362b1f8f-8755-417e-a211-7c9761bf4e0e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Processing event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 10:01:05 compute-0 podman[276968]: 2025-11-24 10:01:05.981964288 +0000 UTC m=+0.146204297 container start ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 10:01:06 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276984]: [NOTICE]   (276988) : New worker (276990) forked
Nov 24 10:01:06 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276984]: [NOTICE]   (276988) : Loading success.
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.060 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:06 compute-0 ceph-mon[74331]: pgmap v1029: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:01:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:06.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.591 257704 DEBUG nova.compute.manager [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
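The wait completing here was armed before the plug: prepare_for_instance_event registered a waiter for network-vif-plugged at 10:01:03.808, neutron delivered the event once OVN reported the port up, and the spawning thread unblocked about three seconds later. The pattern, reduced to a plain threading sketch (nova's implementation uses eventlet primitives and a per-instance lock, and gives up after vif_plugging_timeout):

    import threading

    events: dict[str, threading.Event] = {}

    def prepare(tag: str) -> threading.Event:
        # Register interest BEFORE starting the action that triggers the
        # event, so a fast-arriving event cannot be lost.
        return events.setdefault(tag, threading.Event())

    def on_external_event(tag: str) -> None:
        events.pop(tag).set()  # delivered via the external-events API

    waiter = prepare("network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e")
    # ... plug the VIF, define and start the guest ...
    waiter.wait(timeout=300)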
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.592 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978466.5911908, 7209acb4-1927-431b-ad9e-0838a25f1f80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.593 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] VM Started (Lifecycle Event)
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.595 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.598 257704 INFO nova.virt.libvirt.driver [-] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Instance spawned successfully.
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.599 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.619 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.626 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.628 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.629 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.629 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.630 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.630 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.631 257704 DEBUG nova.virt.libvirt.driver [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.670 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.671 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978466.592182, 7209acb4-1927-431b-ad9e-0838a25f1f80 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.672 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] VM Paused (Lifecycle Event)
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.694 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.697 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978466.595038, 7209acb4-1927-431b-ad9e-0838a25f1f80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.698 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] VM Resumed (Lifecycle Event)
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.747 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.752 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.775 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.795 257704 INFO nova.compute.manager [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Took 7.60 seconds to spawn the instance on the hypervisor.
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.795 257704 DEBUG nova.compute.manager [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:01:06 compute-0 nova_compute[257700]: 2025-11-24 10:01:06.997 257704 INFO nova.compute.manager [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Took 10.02 seconds to build instance.
Nov 24 10:01:07 compute-0 nova_compute[257700]: 2025-11-24 10:01:07.043 257704 DEBUG oslo_concurrency.lockutils [None req-78d1150f-7ba8-4c19-8946-eeeea8a84b9b 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1030: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 10:01:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:07.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:07.546Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:01:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:07.547Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:01:07 compute-0 sudo[277043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:01:07 compute-0 sudo[277043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:07 compute-0 sudo[277043]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:08 compute-0 nova_compute[257700]: 2025-11-24 10:01:08.044 257704 DEBUG nova.compute.manager [req-37d59da7-7a8e-4916-a6eb-a211c9e8de3a req-40365932-feb3-4d6e-b203-f1f274896216 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Received event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:01:08 compute-0 nova_compute[257700]: 2025-11-24 10:01:08.045 257704 DEBUG oslo_concurrency.lockutils [req-37d59da7-7a8e-4916-a6eb-a211c9e8de3a req-40365932-feb3-4d6e-b203-f1f274896216 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:08 compute-0 nova_compute[257700]: 2025-11-24 10:01:08.046 257704 DEBUG oslo_concurrency.lockutils [req-37d59da7-7a8e-4916-a6eb-a211c9e8de3a req-40365932-feb3-4d6e-b203-f1f274896216 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:08 compute-0 nova_compute[257700]: 2025-11-24 10:01:08.047 257704 DEBUG oslo_concurrency.lockutils [req-37d59da7-7a8e-4916-a6eb-a211c9e8de3a req-40365932-feb3-4d6e-b203-f1f274896216 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:08 compute-0 nova_compute[257700]: 2025-11-24 10:01:08.047 257704 DEBUG nova.compute.manager [req-37d59da7-7a8e-4916-a6eb-a211c9e8de3a req-40365932-feb3-4d6e-b203-f1f274896216 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] No waiting events found dispatching network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:01:08 compute-0 nova_compute[257700]: 2025-11-24 10:01:08.048 257704 WARNING nova.compute.manager [req-37d59da7-7a8e-4916-a6eb-a211c9e8de3a req-40365932-feb3-4d6e-b203-f1f274896216 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Received unexpected event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e for instance with vm_state active and task_state None.
Nov 24 10:01:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:08.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:08 compute-0 ceph-mon[74331]: pgmap v1030: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 10:01:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:08 compute-0 nova_compute[257700]: 2025-11-24 10:01:08.819 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:08.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:01:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:08.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:01:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 10:01:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:09.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:01:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:01:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:01:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.171 257704 DEBUG oslo_concurrency.lockutils [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "7209acb4-1927-431b-ad9e-0838a25f1f80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.172 257704 DEBUG oslo_concurrency.lockutils [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.172 257704 DEBUG oslo_concurrency.lockutils [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.172 257704 DEBUG oslo_concurrency.lockutils [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.173 257704 DEBUG oslo_concurrency.lockutils [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.174 257704 INFO nova.compute.manager [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Terminating instance
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.175 257704 DEBUG nova.compute.manager [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 10:01:10 compute-0 kernel: tapfe53799e-0d (unregistering): left promiscuous mode
Nov 24 10:01:10 compute-0 NetworkManager[48883]: <info>  [1763978470.2169] device (tapfe53799e-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.224 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:10 compute-0 ovn_controller[155123]: 2025-11-24T10:01:10Z|00074|binding|INFO|Releasing lport fe53799e-0d96-417b-8153-212f65cd709e from this chassis (sb_readonly=0)
Nov 24 10:01:10 compute-0 ovn_controller[155123]: 2025-11-24T10:01:10Z|00075|binding|INFO|Setting lport fe53799e-0d96-417b-8153-212f65cd709e down in Southbound
Nov 24 10:01:10 compute-0 ovn_controller[155123]: 2025-11-24T10:01:10Z|00076|binding|INFO|Removing iface tapfe53799e-0d ovn-installed in OVS
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.226 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.233 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:3d:30 10.100.0.9'], port_security=['fa:16:3e:19:3d:30 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1496723339', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7209acb4-1927-431b-ad9e-0838a25f1f80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1496723339', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '9', 'neutron:security_group_ids': '33c3a403-57a0-4b88-8817-f12f4bfc92ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd5e84a2-6af3-4d25-9e2e-39e01701962b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=fe53799e-0d96-417b-8153-212f65cd709e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.234 165073 INFO neutron.agent.ovn.metadata.agent [-] Port fe53799e-0d96-417b-8153-212f65cd709e in datapath 2d64d66d-0f9e-4429-a21c-7e55f44b1e68 unbound from our chassis
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.235 165073 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d64d66d-0f9e-4429-a21c-7e55f44b1e68, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.236 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[33854b9c-7295-4e4b-a031-1e142ccade73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.237 165073 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68 namespace which is not needed anymore
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.251 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:10 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 24 10:01:10 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Consumed 5.104s CPU time.
Nov 24 10:01:10 compute-0 systemd-machined[219130]: Machine qemu-5-instance-00000009 terminated.
Nov 24 10:01:10 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276984]: [NOTICE]   (276988) : haproxy version is 2.8.14-c23fe91
Nov 24 10:01:10 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276984]: [NOTICE]   (276988) : path to executable is /usr/sbin/haproxy
Nov 24 10:01:10 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276984]: [WARNING]  (276988) : Exiting Master process...
Nov 24 10:01:10 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276984]: [WARNING]  (276988) : Exiting Master process...
Nov 24 10:01:10 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276984]: [ALERT]    (276988) : Current worker (276990) exited with code 143 (Terminated)
Nov 24 10:01:10 compute-0 neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68[276984]: [WARNING]  (276988) : All workers exited. Exiting... (0)
Nov 24 10:01:10 compute-0 systemd[1]: libpod-ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8.scope: Deactivated successfully.
Nov 24 10:01:10 compute-0 conmon[276984]: conmon ccfde47eb1164b05e1ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8.scope/container/memory.events
Nov 24 10:01:10 compute-0 podman[277095]: 2025-11-24 10:01:10.391235219 +0000 UTC m=+0.043934504 container died ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.412 257704 INFO nova.virt.libvirt.driver [-] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Instance destroyed successfully.
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.413 257704 DEBUG nova.objects.instance [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 7209acb4-1927-431b-ad9e-0838a25f1f80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:01:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8-userdata-shm.mount: Deactivated successfully.
Nov 24 10:01:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-659b3371f248065f86334af0b70e022dcdee8c6f44e3bb8d3a6299e82f775ae2-merged.mount: Deactivated successfully.
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.427 257704 DEBUG nova.virt.libvirt.vif [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T10:00:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-701599068',display_name='tempest-TestNetworkBasicOps-server-701599068',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-701599068',id=9,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIp7477cU+9BtZz5V0pd+PQuO3Ovurx4E+UV6WrNVcmUwBGTU+abM49xOCN0WJEBvIaQJzEyMu21dsjeLfPrFNAGWfarLsdzJyntI7oM2ea7tasmabzC9knEz3j7fj67Bw==',key_name='tempest-TestNetworkBasicOps-771347860',keypairs=<?>,launch_index=0,launched_at=2025-11-24T10:01:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-die6x24m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T10:01:06Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=7209acb4-1927-431b-ad9e-0838a25f1f80,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.429 257704 DEBUG nova.network.os_vif_util [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "fe53799e-0d96-417b-8153-212f65cd709e", "address": "fa:16:3e:19:3d:30", "network": {"id": "2d64d66d-0f9e-4429-a21c-7e55f44b1e68", "bridge": "br-int", "label": "tempest-network-smoke--1571889183", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe53799e-0d", "ovs_interfaceid": "fe53799e-0d96-417b-8153-212f65cd709e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.430 257704 DEBUG nova.network.os_vif_util [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.430 257704 DEBUG os_vif [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.432 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.432 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe53799e-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:10 compute-0 podman[277095]: 2025-11-24 10:01:10.436737911 +0000 UTC m=+0.089437176 container cleanup ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.459 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.462 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:10 compute-0 systemd[1]: libpod-conmon-ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8.scope: Deactivated successfully.
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.465 257704 INFO os_vif [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:3d:30,bridge_name='br-int',has_traffic_filtering=True,id=fe53799e-0d96-417b-8153-212f65cd709e,network=Network(2d64d66d-0f9e-4429-a21c-7e55f44b1e68),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfe53799e-0d')
Nov 24 10:01:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:10.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:10 compute-0 ceph-mon[74331]: pgmap v1031: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 10:01:10 compute-0 podman[277135]: 2025-11-24 10:01:10.532591586 +0000 UTC m=+0.049027201 container remove ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.538 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[b5fd7dd8-1bd2-4f54-b7ee-6890ebba164b]: (4, ('Mon Nov 24 10:01:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68 (ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8)\nccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8\nMon Nov 24 10:01:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68 (ccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8)\nccfde47eb1164b05e1eddd6ec32ff722c26df590adb886fc1f0264b43f9c4ea8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.540 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[f56126e4-7a94-416c-9032-3ea40c33891e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.541 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d64d66d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.542 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:10 compute-0 kernel: tap2d64d66d-00: left promiscuous mode
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.556 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.559 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[0b738240-1756-4a25-b13a-8193c38e9d88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.573 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[469cab74-0f4f-46cd-82c7-4431f5aca1b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.574 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[75118d94-88bd-4e10-a901-5e795d9c91c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.589 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[d3394884-e81e-4337-a66d-65040e4b9b65]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441277, 'reachable_time': 20067, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277168, 'error': None, 'target': 'ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.591 165227 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d64d66d-0f9e-4429-a21c-7e55f44b1e68 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 10:01:10 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:10.591 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[c6052b16-5017-4d32-b477-9244f427c9ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d2d64d66d\x2d0f9e\x2d4429\x2da21c\x2d7e55f44b1e68.mount: Deactivated successfully.
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.639 257704 DEBUG nova.compute.manager [req-2b684066-61ba-44e6-bd0a-e28000c7820f req-1f2c73a0-1b09-4a3d-b6c8-43bde675aa20 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Received event network-vif-unplugged-fe53799e-0d96-417b-8153-212f65cd709e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.640 257704 DEBUG oslo_concurrency.lockutils [req-2b684066-61ba-44e6-bd0a-e28000c7820f req-1f2c73a0-1b09-4a3d-b6c8-43bde675aa20 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.640 257704 DEBUG oslo_concurrency.lockutils [req-2b684066-61ba-44e6-bd0a-e28000c7820f req-1f2c73a0-1b09-4a3d-b6c8-43bde675aa20 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.640 257704 DEBUG oslo_concurrency.lockutils [req-2b684066-61ba-44e6-bd0a-e28000c7820f req-1f2c73a0-1b09-4a3d-b6c8-43bde675aa20 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.641 257704 DEBUG nova.compute.manager [req-2b684066-61ba-44e6-bd0a-e28000c7820f req-1f2c73a0-1b09-4a3d-b6c8-43bde675aa20 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] No waiting events found dispatching network-vif-unplugged-fe53799e-0d96-417b-8153-212f65cd709e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.641 257704 DEBUG nova.compute.manager [req-2b684066-61ba-44e6-bd0a-e28000c7820f req-1f2c73a0-1b09-4a3d-b6c8-43bde675aa20 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Received event network-vif-unplugged-fe53799e-0d96-417b-8153-212f65cd709e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.875 257704 INFO nova.virt.libvirt.driver [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Deleting instance files /var/lib/nova/instances/7209acb4-1927-431b-ad9e-0838a25f1f80_del
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.875 257704 INFO nova.virt.libvirt.driver [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Deletion of /var/lib/nova/instances/7209acb4-1927-431b-ad9e-0838a25f1f80_del complete
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.928 257704 INFO nova.compute.manager [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.929 257704 DEBUG oslo.service.loopingcall [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.929 257704 DEBUG nova.compute.manager [-] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 10:01:10 compute-0 nova_compute[257700]: 2025-11-24 10:01:10.930 257704 DEBUG nova.network.neutron [-] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 10:01:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:01:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 24 10:01:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:01:10] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 24 10:01:11 compute-0 nova_compute[257700]: 2025-11-24 10:01:11.062 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 10:01:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:11.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:12.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:12 compute-0 ceph-mon[74331]: pgmap v1032: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 24 10:01:12 compute-0 nova_compute[257700]: 2025-11-24 10:01:12.557 257704 DEBUG nova.network.neutron [-] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:01:12 compute-0 nova_compute[257700]: 2025-11-24 10:01:12.571 257704 INFO nova.compute.manager [-] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Took 1.64 seconds to deallocate network for instance.
Nov 24 10:01:12 compute-0 nova_compute[257700]: 2025-11-24 10:01:12.643 257704 DEBUG oslo_concurrency.lockutils [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:12 compute-0 nova_compute[257700]: 2025-11-24 10:01:12.643 257704 DEBUG oslo_concurrency.lockutils [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:12 compute-0 nova_compute[257700]: 2025-11-24 10:01:12.706 257704 DEBUG oslo_concurrency.processutils [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:12 compute-0 nova_compute[257700]: 2025-11-24 10:01:12.737 257704 DEBUG nova.compute.manager [req-ed2a3661-96b2-4643-bd7e-a08986c98ecd req-5122b734-8e96-4675-a0a4-7a1331314aef 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Received event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:01:12 compute-0 nova_compute[257700]: 2025-11-24 10:01:12.737 257704 DEBUG oslo_concurrency.lockutils [req-ed2a3661-96b2-4643-bd7e-a08986c98ecd req-5122b734-8e96-4675-a0a4-7a1331314aef 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:12 compute-0 nova_compute[257700]: 2025-11-24 10:01:12.738 257704 DEBUG oslo_concurrency.lockutils [req-ed2a3661-96b2-4643-bd7e-a08986c98ecd req-5122b734-8e96-4675-a0a4-7a1331314aef 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:12 compute-0 nova_compute[257700]: 2025-11-24 10:01:12.738 257704 DEBUG oslo_concurrency.lockutils [req-ed2a3661-96b2-4643-bd7e-a08986c98ecd req-5122b734-8e96-4675-a0a4-7a1331314aef 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:12 compute-0 nova_compute[257700]: 2025-11-24 10:01:12.738 257704 DEBUG nova.compute.manager [req-ed2a3661-96b2-4643-bd7e-a08986c98ecd req-5122b734-8e96-4675-a0a4-7a1331314aef 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] No waiting events found dispatching network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:01:12 compute-0 nova_compute[257700]: 2025-11-24 10:01:12.738 257704 WARNING nova.compute.manager [req-ed2a3661-96b2-4643-bd7e-a08986c98ecd req-5122b734-8e96-4675-a0a4-7a1331314aef 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Received unexpected event network-vif-plugged-fe53799e-0d96-417b-8153-212f65cd709e for instance with vm_state deleted and task_state None.
Nov 24 10:01:12 compute-0 podman[277172]: 2025-11-24 10:01:12.792698249 +0000 UTC m=+0.063077906 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 10:01:12 compute-0 podman[277174]: 2025-11-24 10:01:12.899110694 +0000 UTC m=+0.164083068 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:01:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Nov 24 10:01:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:01:13 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376239360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:13 compute-0 nova_compute[257700]: 2025-11-24 10:01:13.176 257704 DEBUG oslo_concurrency.processutils [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:13 compute-0 nova_compute[257700]: 2025-11-24 10:01:13.183 257704 DEBUG nova.compute.provider_tree [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:01:13 compute-0 nova_compute[257700]: 2025-11-24 10:01:13.210 257704 DEBUG nova.scheduler.client.report [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:01:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:13 compute-0 nova_compute[257700]: 2025-11-24 10:01:13.244 257704 DEBUG oslo_concurrency.lockutils [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:13.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:13 compute-0 nova_compute[257700]: 2025-11-24 10:01:13.289 257704 INFO nova.scheduler.client.report [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 7209acb4-1927-431b-ad9e-0838a25f1f80
Nov 24 10:01:13 compute-0 nova_compute[257700]: 2025-11-24 10:01:13.388 257704 DEBUG oslo_concurrency.lockutils [None req-946668a7-e864-4886-ac06-bc52e8ab66c8 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "7209acb4-1927-431b-ad9e-0838a25f1f80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:13 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/376239360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:14.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:14 compute-0 ceph-mon[74331]: pgmap v1033: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Nov 24 10:01:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:01:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:01:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:01:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:01:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:01:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:15.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:01:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:01:15 compute-0 nova_compute[257700]: 2025-11-24 10:01:15.461 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:01:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:01:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:01:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:01:15 compute-0 ceph-mon[74331]: pgmap v1034: 353 pgs: 353 active+clean; 41 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:01:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:01:16 compute-0 nova_compute[257700]: 2025-11-24 10:01:16.092 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:16.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:16 compute-0 nova_compute[257700]: 2025-11-24 10:01:16.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:01:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:17.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:17.548Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:01:18 compute-0 ceph-mon[74331]: pgmap v1035: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 24 10:01:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:18.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:18.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:01:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Nov 24 10:01:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:19.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:19 compute-0 podman[277244]: 2025-11-24 10:01:19.821302484 +0000 UTC m=+0.094433480 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 24 10:01:19 compute-0 nova_compute[257700]: 2025-11-24 10:01:19.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:19 compute-0 nova_compute[257700]: 2025-11-24 10:01:19.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:01:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:01:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:01:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:01:20 compute-0 ceph-mon[74331]: pgmap v1036: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Nov 24 10:01:20 compute-0 nova_compute[257700]: 2025-11-24 10:01:20.465 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:20.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:20.572 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:20.573 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:20.573 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:20 compute-0 nova_compute[257700]: 2025-11-24 10:01:20.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:20 compute-0 nova_compute[257700]: 2025-11-24 10:01:20.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:01:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:01:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 24 10:01:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:01:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 24 10:01:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Nov 24 10:01:21 compute-0 nova_compute[257700]: 2025-11-24 10:01:21.142 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1254106227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:21.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:21 compute-0 nova_compute[257700]: 2025-11-24 10:01:21.915 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:22 compute-0 ceph-mon[74331]: pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Nov 24 10:01:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1213079402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:22 compute-0 nova_compute[257700]: 2025-11-24 10:01:22.446 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:22.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:22 compute-0 nova_compute[257700]: 2025-11-24 10:01:22.528 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:22 compute-0 nova_compute[257700]: 2025-11-24 10:01:22.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:22 compute-0 nova_compute[257700]: 2025-11-24 10:01:22.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:22 compute-0 nova_compute[257700]: 2025-11-24 10:01:22.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:01:22 compute-0 nova_compute[257700]: 2025-11-24 10:01:22.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:01:22 compute-0 nova_compute[257700]: 2025-11-24 10:01:22.934 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:01:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Nov 24 10:01:23 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1862029681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:23.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:23 compute-0 nova_compute[257700]: 2025-11-24 10:01:23.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:23 compute-0 nova_compute[257700]: 2025-11-24 10:01:23.958 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:23 compute-0 nova_compute[257700]: 2025-11-24 10:01:23.959 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:23 compute-0 nova_compute[257700]: 2025-11-24 10:01:23.959 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:23 compute-0 nova_compute[257700]: 2025-11-24 10:01:23.960 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:01:23 compute-0 nova_compute[257700]: 2025-11-24 10:01:23.960 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:24 compute-0 ceph-mon[74331]: pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 96 op/s
Nov 24 10:01:24 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/518376627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:01:24 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2072489161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:24 compute-0 nova_compute[257700]: 2025-11-24 10:01:24.451 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:24.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:24 compute-0 nova_compute[257700]: 2025-11-24 10:01:24.654 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:01:24 compute-0 nova_compute[257700]: 2025-11-24 10:01:24.657 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4594MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:01:24 compute-0 nova_compute[257700]: 2025-11-24 10:01:24.657 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:24 compute-0 nova_compute[257700]: 2025-11-24 10:01:24.658 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:24 compute-0 nova_compute[257700]: 2025-11-24 10:01:24.721 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:01:24 compute-0 nova_compute[257700]: 2025-11-24 10:01:24.722 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:01:24 compute-0 nova_compute[257700]: 2025-11-24 10:01:24.738 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:01:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:01:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:01:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:01:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:01:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3938711372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:25 compute-0 nova_compute[257700]: 2025-11-24 10:01:25.197 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:25 compute-0 nova_compute[257700]: 2025-11-24 10:01:25.205 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:01:25 compute-0 nova_compute[257700]: 2025-11-24 10:01:25.217 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:01:25 compute-0 nova_compute[257700]: 2025-11-24 10:01:25.233 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:01:25 compute-0 nova_compute[257700]: 2025-11-24 10:01:25.233 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2072489161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3938711372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:25.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:25 compute-0 nova_compute[257700]: 2025-11-24 10:01:25.411 257704 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978470.4095275, 7209acb4-1927-431b-ad9e-0838a25f1f80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:01:25 compute-0 nova_compute[257700]: 2025-11-24 10:01:25.412 257704 INFO nova.compute.manager [-] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] VM Stopped (Lifecycle Event)
Nov 24 10:01:25 compute-0 nova_compute[257700]: 2025-11-24 10:01:25.436 257704 DEBUG nova.compute.manager [None req-3117bed5-29a2-4043-bb49-831e1de58fc6 - - - - - -] [instance: 7209acb4-1927-431b-ad9e-0838a25f1f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:01:25 compute-0 nova_compute[257700]: 2025-11-24 10:01:25.468 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:26 compute-0 nova_compute[257700]: 2025-11-24 10:01:26.144 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:26 compute-0 nova_compute[257700]: 2025-11-24 10:01:26.233 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:26 compute-0 ceph-mon[74331]: pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:26.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:01:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:27.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:27.549Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:01:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:27.550Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:01:27 compute-0 nova_compute[257700]: 2025-11-24 10:01:27.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:01:27 compute-0 sudo[277319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:01:27 compute-0 sudo[277319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:27 compute-0 sudo[277319]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:28 compute-0 ceph-mon[74331]: pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:01:28 compute-0 sshd-session[277312]: Invalid user admin from 14.215.126.91 port 34826
Nov 24 10:01:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:28.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:28 compute-0 sshd-session[277312]: Received disconnect from 14.215.126.91 port 34826:11: Bye Bye [preauth]
Nov 24 10:01:28 compute-0 sshd-session[277312]: Disconnected from invalid user admin 14.215.126.91 port 34826 [preauth]
Nov 24 10:01:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:28.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:01:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:29.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:01:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:01:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:01:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:01:30 compute-0 ceph-mon[74331]: pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:30 compute-0 nova_compute[257700]: 2025-11-24 10:01:30.471 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:30.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:01:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:01:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:01:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:01:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:31 compute-0 nova_compute[257700]: 2025-11-24 10:01:31.146 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:31.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:01:32 compute-0 ceph-mon[74331]: pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:32.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:01:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:33.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:34 compute-0 ceph-mon[74331]: pgmap v1043: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:01:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:34.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:01:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:01:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:01:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:01:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:35.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:35 compute-0 nova_compute[257700]: 2025-11-24 10:01:35.474 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:36 compute-0 sudo[277352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:01:36 compute-0 sudo[277352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:36 compute-0 sudo[277352]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:36 compute-0 nova_compute[257700]: 2025-11-24 10:01:36.148 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:36 compute-0 sudo[277377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:01:36 compute-0 sudo[277377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:36 compute-0 ceph-mon[74331]: pgmap v1044: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:36.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:36 compute-0 sudo[277377]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:36 compute-0 nova_compute[257700]: 2025-11-24 10:01:36.821 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "5c9d7984-c8b4-481b-8d02-4149b3de004a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:36 compute-0 nova_compute[257700]: 2025-11-24 10:01:36.823 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:36 compute-0 nova_compute[257700]: 2025-11-24 10:01:36.848 257704 DEBUG nova.compute.manager [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 10:01:36 compute-0 nova_compute[257700]: 2025-11-24 10:01:36.934 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:36 compute-0 nova_compute[257700]: 2025-11-24 10:01:36.935 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:36 compute-0 nova_compute[257700]: 2025-11-24 10:01:36.946 257704 DEBUG nova.virt.hardware [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 10:01:36 compute-0 nova_compute[257700]: 2025-11-24 10:01:36.946 257704 INFO nova.compute.claims [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Claim successful on node compute-0.ctlplane.example.com
Nov 24 10:01:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.156 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:37.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:37.551Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:01:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:37.551Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:01:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:01:37 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501652805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.606 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.612 257704 DEBUG nova.compute.provider_tree [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.628 257704 DEBUG nova.scheduler.client.report [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.643 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.643 257704 DEBUG nova.compute.manager [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.684 257704 DEBUG nova.compute.manager [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.685 257704 DEBUG nova.network.neutron [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.700 257704 INFO nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.718 257704 DEBUG nova.compute.manager [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.805 257704 DEBUG nova.compute.manager [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.806 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.807 257704 INFO nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Creating image(s)
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.828 257704 DEBUG nova.storage.rbd_utils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 5c9d7984-c8b4-481b-8d02-4149b3de004a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.855 257704 DEBUG nova.storage.rbd_utils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 5c9d7984-c8b4-481b-8d02-4149b3de004a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.879 257704 DEBUG nova.storage.rbd_utils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 5c9d7984-c8b4-481b-8d02-4149b3de004a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.882 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.937 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
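Note that nova never runs qemu-img info bare: it wraps it in oslo_concurrency.prlimit, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a malformed or hostile qcow2 header cannot wedge the compute host. The same guard can be reproduced with the stdlib resource module; this is a sketch of the idea, not nova's code:

    import resource
    import subprocess

    def limited_qemu_img_info(path, as_bytes=1 << 30, cpu_seconds=30):
        """Run 'qemu-img info' with RLIMIT_AS and RLIMIT_CPU set in the child."""
        def set_limits():
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        return subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, text=True, preexec_fn=set_limits,
        )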
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.938 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.939 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.939 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.963 257704 DEBUG nova.storage.rbd_utils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 5c9d7984-c8b4-481b-8d02-4149b3de004a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:01:37 compute-0 nova_compute[257700]: 2025-11-24 10:01:37.967 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 5c9d7984-c8b4-481b-8d02-4149b3de004a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 10:01:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 10:01:38 compute-0 nova_compute[257700]: 2025-11-24 10:01:38.481 257704 DEBUG nova.policy [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
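The policy failure above is oslo.policy at work: nova evaluates network:attach_external_network against the request credentials, and a reader/member token with is_admin False fails an admin-only rule, so the instance simply may not attach directly to an external network. A standalone reproduction with oslo.policy; the "is_admin:True" rule string is an assumption for illustration (nova defines its real default in nova.policies):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Hypothetical default, standing in for nova's registered rule.
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "is_admin:True"))

    creds = {"roles": ["reader", "member"], "is_admin": False,
             "project_id": "94d069fc040647d5a6e54894eec915fe",
             "user_id": "43f79ff3105e4372a3c095e8057d4f1f"}
    allowed = enforcer.enforce("network:attach_external_network", {}, creds)
    print(allowed)  # False, matching the "Policy check ... failed" line above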
Nov 24 10:01:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:38.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:38 compute-0 ceph-mon[74331]: pgmap v1045: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:01:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2501652805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:01:38 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:38.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:01:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 10:01:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 10:01:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:39.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:39 compute-0 nova_compute[257700]: 2025-11-24 10:01:39.304 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 5c9d7984-c8b4-481b-8d02-4149b3de004a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:39 compute-0 nova_compute[257700]: 2025-11-24 10:01:39.396 257704 DEBUG nova.storage.rbd_utils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 5c9d7984-c8b4-481b-8d02-4149b3de004a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
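The import/resize pair above is the tail of nova's Ceph-backed image path: fetch the Glance image once into the /var/lib/nova/instances/_base cache, rbd-import the cached file into the vms pool as <uuid>_disk, then grow the RBD image to the flavor's root disk (1073741824 B = 1 GiB here, matching m1.nano's root_gb=1). Nova does the resize through the librbd Python binding (rbd_utils.resize); the condensed CLI equivalent below is illustrative only, with pool and credentials taken from the log:

    import subprocess

    def import_and_resize(base_path, image_name, size_bytes,
                          pool="vms", ceph_id="openstack",
                          conf="/etc/ceph/ceph.conf"):
        """rbd-import a cached base image, then resize it to the flavor size."""
        common = ["--id", ceph_id, "--conf", conf]
        subprocess.run(["rbd", "import", "--pool", pool, base_path,
                        image_name, "--image-format=2", *common], check=True)
        size_mb = size_bytes // (1 << 20)   # rbd --size defaults to MiB
        subprocess.run(["rbd", "resize", "--pool", pool, image_name,
                        "--size", str(size_mb), *common], check=True)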
Nov 24 10:01:39 compute-0 nova_compute[257700]: 2025-11-24 10:01:39.609 257704 DEBUG nova.objects.instance [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 5c9d7984-c8b4-481b-8d02-4149b3de004a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:01:39 compute-0 nova_compute[257700]: 2025-11-24 10:01:39.644 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 10:01:39 compute-0 nova_compute[257700]: 2025-11-24 10:01:39.645 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Ensure instance console log exists: /var/lib/nova/instances/5c9d7984-c8b4-481b-8d02-4149b3de004a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 10:01:39 compute-0 nova_compute[257700]: 2025-11-24 10:01:39.645 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:39 compute-0 nova_compute[257700]: 2025-11-24 10:01:39.646 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:39 compute-0 nova_compute[257700]: 2025-11-24 10:01:39.646 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:39 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:39 compute-0 ceph-mon[74331]: pgmap v1046: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:39 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 10:01:39 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 10:01:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:01:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:01:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:01:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:01:40 compute-0 nova_compute[257700]: 2025-11-24 10:01:40.478 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:40.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 10:01:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 10:01:40 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:01:40] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:01:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:01:40] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:01:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:41 compute-0 nova_compute[257700]: 2025-11-24 10:01:41.150 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:41 compute-0 nova_compute[257700]: 2025-11-24 10:01:41.264 257704 DEBUG nova.network.neutron [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Successfully created port: 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 10:01:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:41.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 10:01:41 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 10:01:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 32 op/s
Nov 24 10:01:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:01:41 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 10:01:42 compute-0 nova_compute[257700]: 2025-11-24 10:01:42.255 257704 DEBUG nova.network.neutron [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Successfully updated port: 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 10:01:42 compute-0 nova_compute[257700]: 2025-11-24 10:01:42.269 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-5c9d7984-c8b4-481b-8d02-4149b3de004a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:01:42 compute-0 nova_compute[257700]: 2025-11-24 10:01:42.269 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-5c9d7984-c8b4-481b-8d02-4149b3de004a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:01:42 compute-0 nova_compute[257700]: 2025-11-24 10:01:42.269 257704 DEBUG nova.network.neutron [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 10:01:42 compute-0 nova_compute[257700]: 2025-11-24 10:01:42.347 257704 DEBUG nova.compute.manager [req-1935b06e-6d33-463b-ae25-ff13038a7b9d req-7f82a7e3-9977-4047-b448-6fa9d17180fb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Received event network-changed-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:01:42 compute-0 nova_compute[257700]: 2025-11-24 10:01:42.347 257704 DEBUG nova.compute.manager [req-1935b06e-6d33-463b-ae25-ff13038a7b9d req-7f82a7e3-9977-4047-b448-6fa9d17180fb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Refreshing instance network info cache due to event network-changed-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:01:42 compute-0 nova_compute[257700]: 2025-11-24 10:01:42.347 257704 DEBUG oslo_concurrency.lockutils [req-1935b06e-6d33-463b-ae25-ff13038a7b9d req-7f82a7e3-9977-4047-b448-6fa9d17180fb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-5c9d7984-c8b4-481b-8d02-4149b3de004a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:01:42 compute-0 nova_compute[257700]: 2025-11-24 10:01:42.382 257704 DEBUG nova.network.neutron [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 10:01:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:42.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:43.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.709 257704 DEBUG nova.network.neutron [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Updating instance_info_cache with network_info: [{"id": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "address": "fa:16:3e:aa:32:5c", "network": {"id": "d754e322-9ff1-4d43-9c36-046a636812dd", "bridge": "br-int", "label": "tempest-network-smoke--1566260689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a53d19e-b3", "ovs_interfaceid": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
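The network_info blob above is the Neutron VIF document nova caches per instance; everything libvirt needs later (MAC, OVS bridge, tap device name, fixed IP, MTU) is in there. Pulling the interesting fields back out is plain dict navigation; the literal below is a trimmed copy of the logged structure:

    vif = {
        "address": "fa:16:3e:aa:32:5c",
        "devname": "tap9a53d19e-b3",
        "network": {
            "bridge": "br-int",
            "meta": {"mtu": 1442},
            "subnets": [{
                "gateway": {"address": "10.100.0.1"},
                "ips": [{"address": "10.100.0.3"}],
            }],
        },
    }
    subnet = vif["network"]["subnets"][0]
    print(vif["address"], vif["devname"], vif["network"]["bridge"],
          subnet["ips"][0]["address"], subnet["gateway"]["address"],
          vif["network"]["meta"]["mtu"])
    # fa:16:3e:aa:32:5c tap9a53d19e-b3 br-int 10.100.0.3 10.100.0.1 1442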
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.734 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-5c9d7984-c8b4-481b-8d02-4149b3de004a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.735 257704 DEBUG nova.compute.manager [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Instance network_info: |[{"id": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "address": "fa:16:3e:aa:32:5c", "network": {"id": "d754e322-9ff1-4d43-9c36-046a636812dd", "bridge": "br-int", "label": "tempest-network-smoke--1566260689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a53d19e-b3", "ovs_interfaceid": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.735 257704 DEBUG oslo_concurrency.lockutils [req-1935b06e-6d33-463b-ae25-ff13038a7b9d req-7f82a7e3-9977-4047-b448-6fa9d17180fb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-5c9d7984-c8b4-481b-8d02-4149b3de004a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.736 257704 DEBUG nova.network.neutron [req-1935b06e-6d33-463b-ae25-ff13038a7b9d req-7f82a7e3-9977-4047-b448-6fa9d17180fb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Refreshing network info cache for port 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.739 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Start _get_guest_xml network_info=[{"id": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "address": "fa:16:3e:aa:32:5c", "network": {"id": "d754e322-9ff1-4d43-9c36-046a636812dd", "bridge": "br-int", "label": "tempest-network-smoke--1566260689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a53d19e-b3", "ovs_interfaceid": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_format': None, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 10:01:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.745 257704 WARNING nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.751 257704 DEBUG nova.virt.libvirt.host [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.751 257704 DEBUG nova.virt.libvirt.host [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.760 257704 DEBUG nova.virt.libvirt.host [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.761 257704 DEBUG nova.virt.libvirt.host [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.762 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.762 257704 DEBUG nova.virt.hardware [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.763 257704 DEBUG nova.virt.hardware [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.763 257704 DEBUG nova.virt.hardware [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.763 257704 DEBUG nova.virt.hardware [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.764 257704 DEBUG nova.virt.hardware [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.764 257704 DEBUG nova.virt.hardware [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.764 257704 DEBUG nova.virt.hardware [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.765 257704 DEBUG nova.virt.hardware [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.765 257704 DEBUG nova.virt.hardware [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.765 257704 DEBUG nova.virt.hardware [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.766 257704 DEBUG nova.virt.hardware [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
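The topology walk above is fully determined: with no flavor or image constraints (all the 0:0:0 limits fall back to maxima of 65536), nova enumerates the ways to factor 1 vCPU into sockets x cores x threads, which yields exactly one candidate, 1:1:1. The enumeration step can be reproduced in a few lines; this is a simplification of nova.virt.hardware, not its actual code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], as in the log
    print(list(possible_topologies(4)))   # six candidates, e.g. (4, 1, 1)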
Nov 24 10:01:43 compute-0 nova_compute[257700]: 2025-11-24 10:01:43.770 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:43 compute-0 podman[277630]: 2025-11-24 10:01:43.858160014 +0000 UTC m=+0.110196259 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 10:01:43 compute-0 podman[277631]: 2025-11-24 10:01:43.908146937 +0000 UTC m=+0.157811463 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 10:01:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:44.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:01:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:01:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:01:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:01:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:45.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:01:45
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', '.mgr', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', '.nfs', 'default.rgw.control']
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:01:45 compute-0 nova_compute[257700]: 2025-11-24 10:01:45.482 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:01:45 compute-0 nova_compute[257700]: 2025-11-24 10:01:45.831 257704 DEBUG nova.network.neutron [req-1935b06e-6d33-463b-ae25-ff13038a7b9d req-7f82a7e3-9977-4047-b448-6fa9d17180fb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Updated VIF entry in instance network info cache for port 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:01:45 compute-0 nova_compute[257700]: 2025-11-24 10:01:45.832 257704 DEBUG nova.network.neutron [req-1935b06e-6d33-463b-ae25-ff13038a7b9d req-7f82a7e3-9977-4047-b448-6fa9d17180fb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Updating instance_info_cache with network_info: [{"id": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "address": "fa:16:3e:aa:32:5c", "network": {"id": "d754e322-9ff1-4d43-9c36-046a636812dd", "bridge": "br-int", "label": "tempest-network-smoke--1566260689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a53d19e-b3", "ovs_interfaceid": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:01:45 compute-0 nova_compute[257700]: 2025-11-24 10:01:45.846 257704 DEBUG oslo_concurrency.lockutils [req-1935b06e-6d33-463b-ae25-ff13038a7b9d req-7f82a7e3-9977-4047-b448-6fa9d17180fb 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-5c9d7984-c8b4-481b-8d02-4149b3de004a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
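The pg_autoscaler arithmetic above is reproducible: each pool's pg target is usage_ratio * bias * (target PGs per OSD * OSD count), then quantized to a power of two. The logged numbers fit mon_target_pg_per_osd=100 with 3 OSDs, an inference from the log rather than something it states. Tiny targets still quantize up to the pool's current or minimum pg_num, which is why nothing drops below 32 here (16 is suggested only for cephfs.cephfs.meta, current 32). A worked check:

    # Reproduce the pg_autoscaler targets logged above.
    # Assumes mon_target_pg_per_osd=100 and 3 OSDs (inferred, not in the log).
    TOTAL_PG_BUDGET = 100 * 3

    pools = {
        "vms":                (0.0003459970412515465, 1.0),
        "images":             (0.000665858301588852, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        target = ratio * bias * TOTAL_PG_BUDGET
        print(f"{name}: pg target {target:.6g}")
    # vms: 0.103799   images: 0.199757   cephfs.cephfs.meta: 0.000610471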
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:01:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:01:46 compute-0 nova_compute[257700]: 2025-11-24 10:01:46.153 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:46.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:46 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:46 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:46 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:46 compute-0 ceph-mon[74331]: pgmap v1047: 353 pgs: 353 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:01:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 10:01:46 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 10:01:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:01:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:01:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:01:46 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:46 compute-0 sudo[277697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:01:46 compute-0 sudo[277697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:46 compute-0 sudo[277697]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:46 compute-0 sudo[277722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:01:46 compute-0 sudo[277722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 10:01:47 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817330001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.053 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.092 257704 DEBUG nova.storage.rbd_utils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 5c9d7984-c8b4-481b-8d02-4149b3de004a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.100 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:47.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:47 compute-0 podman[277830]: 2025-11-24 10:01:47.364132266 +0000 UTC m=+0.048772924 container create 0680c4659f97029664cdf9539c59d4f910f58c30ea6a327f0eedd487ba69e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_khayyam, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 24 10:01:47 compute-0 systemd[1]: Started libpod-conmon-0680c4659f97029664cdf9539c59d4f910f58c30ea6a327f0eedd487ba69e71a.scope.
Nov 24 10:01:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:01:47 compute-0 podman[277830]: 2025-11-24 10:01:47.340353459 +0000 UTC m=+0.024994107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:01:47 compute-0 podman[277830]: 2025-11-24 10:01:47.44255224 +0000 UTC m=+0.127192858 container init 0680c4659f97029664cdf9539c59d4f910f58c30ea6a327f0eedd487ba69e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_khayyam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 10:01:47 compute-0 podman[277830]: 2025-11-24 10:01:47.450007853 +0000 UTC m=+0.134648471 container start 0680c4659f97029664cdf9539c59d4f910f58c30ea6a327f0eedd487ba69e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_khayyam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:01:47 compute-0 podman[277830]: 2025-11-24 10:01:47.453486479 +0000 UTC m=+0.138127117 container attach 0680c4659f97029664cdf9539c59d4f910f58c30ea6a327f0eedd487ba69e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 10:01:47 compute-0 systemd[1]: libpod-0680c4659f97029664cdf9539c59d4f910f58c30ea6a327f0eedd487ba69e71a.scope: Deactivated successfully.
Nov 24 10:01:47 compute-0 elastic_khayyam[277846]: 167 167
Nov 24 10:01:47 compute-0 conmon[277846]: conmon 0680c4659f97029664cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0680c4659f97029664cdf9539c59d4f910f58c30ea6a327f0eedd487ba69e71a.scope/container/memory.events
Nov 24 10:01:47 compute-0 podman[277830]: 2025-11-24 10:01:47.458277257 +0000 UTC m=+0.142917875 container died 0680c4659f97029664cdf9539c59d4f910f58c30ea6a327f0eedd487ba69e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_khayyam, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:01:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ee34f968e08190f545c79e1c67d94a0e6f24c23b23f0868b9c15fc65446eca0-merged.mount: Deactivated successfully.
Nov 24 10:01:47 compute-0 podman[277830]: 2025-11-24 10:01:47.500278663 +0000 UTC m=+0.184919311 container remove 0680c4659f97029664cdf9539c59d4f910f58c30ea6a327f0eedd487ba69e71a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_khayyam, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 10:01:47 compute-0 systemd[1]: libpod-conmon-0680c4659f97029664cdf9539c59d4f910f58c30ea6a327f0eedd487ba69e71a.scope: Deactivated successfully.
Nov 24 10:01:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:47.552Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:01:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 10:01:47 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984696211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.579 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.581 257704 DEBUG nova.virt.libvirt.vif [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-956984475',display_name='tempest-TestNetworkBasicOps-server-956984475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-956984475',id=10,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKuoTfe68HXZGbJfjz8QvzvUcn9raHUm7jcwqmlo2grIjCxn4xXYaNtD3Te7wsfeP7X8BrDZpRm661umH273S3VAqs9EBsjs6AcgTXTQEtr5AtHIzKyqSFDKtZjPMSYskQ==',key_name='tempest-TestNetworkBasicOps-1736723705',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-d03wzt7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:01:37Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=5c9d7984-c8b4-481b-8d02-4149b3de004a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "address": "fa:16:3e:aa:32:5c", "network": {"id": "d754e322-9ff1-4d43-9c36-046a636812dd", "bridge": "br-int", "label": "tempest-network-smoke--1566260689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a53d19e-b3", "ovs_interfaceid": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.582 257704 DEBUG nova.network.os_vif_util [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "address": "fa:16:3e:aa:32:5c", "network": {"id": "d754e322-9ff1-4d43-9c36-046a636812dd", "bridge": "br-int", "label": "tempest-network-smoke--1566260689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a53d19e-b3", "ovs_interfaceid": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.583 257704 DEBUG nova.network.os_vif_util [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:32:5c,bridge_name='br-int',has_traffic_filtering=True,id=9a53d19e-b3d6-44e0-9943-79a68e4f4fc3,network=Network(d754e322-9ff1-4d43-9c36-046a636812dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a53d19e-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.585 257704 DEBUG nova.objects.instance [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c9d7984-c8b4-481b-8d02-4149b3de004a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.598 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] End _get_guest_xml xml=<domain type="kvm">
Nov 24 10:01:47 compute-0 nova_compute[257700]:   <uuid>5c9d7984-c8b4-481b-8d02-4149b3de004a</uuid>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   <name>instance-0000000a</name>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   <memory>131072</memory>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   <vcpu>1</vcpu>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   <metadata>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <nova:name>tempest-TestNetworkBasicOps-server-956984475</nova:name>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <nova:creationTime>2025-11-24 10:01:43</nova:creationTime>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <nova:flavor name="m1.nano">
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <nova:memory>128</nova:memory>
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <nova:disk>1</nova:disk>
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <nova:swap>0</nova:swap>
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <nova:vcpus>1</nova:vcpus>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       </nova:flavor>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <nova:owner>
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       </nova:owner>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <nova:ports>
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <nova:port uuid="9a53d19e-b3d6-44e0-9943-79a68e4f4fc3">
Nov 24 10:01:47 compute-0 nova_compute[257700]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:         </nova:port>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       </nova:ports>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     </nova:instance>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   </metadata>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   <sysinfo type="smbios">
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <system>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <entry name="manufacturer">RDO</entry>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <entry name="product">OpenStack Compute</entry>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <entry name="serial">5c9d7984-c8b4-481b-8d02-4149b3de004a</entry>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <entry name="uuid">5c9d7984-c8b4-481b-8d02-4149b3de004a</entry>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <entry name="family">Virtual Machine</entry>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     </system>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   </sysinfo>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   <os>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <boot dev="hd"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <smbios mode="sysinfo"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   </os>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   <features>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <acpi/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <apic/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <vmcoreinfo/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   </features>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   <clock offset="utc">
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <timer name="hpet" present="no"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   </clock>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   <cpu mode="host-model" match="exact">
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   </cpu>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   <devices>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <disk type="network" device="disk">
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/5c9d7984-c8b4-481b-8d02-4149b3de004a_disk">
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       </source>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       </auth>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <target dev="vda" bus="virtio"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     </disk>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <disk type="network" device="cdrom">
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/5c9d7984-c8b4-481b-8d02-4149b3de004a_disk.config">
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       </source>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 10:01:47 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       </auth>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <target dev="sda" bus="sata"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     </disk>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <interface type="ethernet">
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <mac address="fa:16:3e:aa:32:5c"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <mtu size="1442"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <target dev="tap9a53d19e-b3"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     </interface>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <serial type="pty">
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <log file="/var/lib/nova/instances/5c9d7984-c8b4-481b-8d02-4149b3de004a/console.log" append="off"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     </serial>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <video>
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     </video>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <input type="tablet" bus="usb"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <rng model="virtio">
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <backend model="random">/dev/urandom</backend>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     </rng>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <controller type="usb" index="0"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     <memballoon model="virtio">
Nov 24 10:01:47 compute-0 nova_compute[257700]:       <stats period="10"/>
Nov 24 10:01:47 compute-0 nova_compute[257700]:     </memballoon>
Nov 24 10:01:47 compute-0 nova_compute[257700]:   </devices>
Nov 24 10:01:47 compute-0 nova_compute[257700]: </domain>
Nov 24 10:01:47 compute-0 nova_compute[257700]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.600 257704 DEBUG nova.compute.manager [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Preparing to wait for external event network-vif-plugged-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.600 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.600 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.601 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.602 257704 DEBUG nova.virt.libvirt.vif [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-956984475',display_name='tempest-TestNetworkBasicOps-server-956984475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-956984475',id=10,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKuoTfe68HXZGbJfjz8QvzvUcn9raHUm7jcwqmlo2grIjCxn4xXYaNtD3Te7wsfeP7X8BrDZpRm661umH273S3VAqs9EBsjs6AcgTXTQEtr5AtHIzKyqSFDKtZjPMSYskQ==',key_name='tempest-TestNetworkBasicOps-1736723705',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-d03wzt7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:01:37Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=5c9d7984-c8b4-481b-8d02-4149b3de004a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "address": "fa:16:3e:aa:32:5c", "network": {"id": "d754e322-9ff1-4d43-9c36-046a636812dd", "bridge": "br-int", "label": "tempest-network-smoke--1566260689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a53d19e-b3", "ovs_interfaceid": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.602 257704 DEBUG nova.network.os_vif_util [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "address": "fa:16:3e:aa:32:5c", "network": {"id": "d754e322-9ff1-4d43-9c36-046a636812dd", "bridge": "br-int", "label": "tempest-network-smoke--1566260689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a53d19e-b3", "ovs_interfaceid": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.603 257704 DEBUG nova.network.os_vif_util [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:32:5c,bridge_name='br-int',has_traffic_filtering=True,id=9a53d19e-b3d6-44e0-9943-79a68e4f4fc3,network=Network(d754e322-9ff1-4d43-9c36-046a636812dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a53d19e-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.603 257704 DEBUG os_vif [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:32:5c,bridge_name='br-int',has_traffic_filtering=True,id=9a53d19e-b3d6-44e0-9943-79a68e4f4fc3,network=Network(d754e322-9ff1-4d43-9c36-046a636812dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a53d19e-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.604 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.604 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.605 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.609 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.609 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a53d19e-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.610 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9a53d19e-b3, col_values=(('external_ids', {'iface-id': '9a53d19e-b3d6-44e0-9943-79a68e4f4fc3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:32:5c', 'vm-uuid': '5c9d7984-c8b4-481b-8d02-4149b3de004a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.612 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:47 compute-0 NetworkManager[48883]: <info>  [1763978507.6136] manager: (tap9a53d19e-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.614 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.620 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.621 257704 INFO os_vif [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:32:5c,bridge_name='br-int',has_traffic_filtering=True,id=9a53d19e-b3d6-44e0-9943-79a68e4f4fc3,network=Network(d754e322-9ff1-4d43-9c36-046a636812dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a53d19e-b3')
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.669 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.669 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.670 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:aa:32:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.670 257704 INFO nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Using config drive
Nov 24 10:01:47 compute-0 podman[277875]: 2025-11-24 10:01:47.687401788 +0000 UTC m=+0.041192637 container create c3628370f77c0d07791d8d35bcca0b82e44f03ae6534d40ab463c1068ec08668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 10:01:47 compute-0 nova_compute[257700]: 2025-11-24 10:01:47.706 257704 DEBUG nova.storage.rbd_utils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 5c9d7984-c8b4-481b-8d02-4149b3de004a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:01:47 compute-0 systemd[1]: Started libpod-conmon-c3628370f77c0d07791d8d35bcca0b82e44f03ae6534d40ab463c1068ec08668.scope.
Nov 24 10:01:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:01:47 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b280572b8d18a5902fb8c3f57cc6eebbb1272656b7d2cd505fcf12e0f9e1805/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b280572b8d18a5902fb8c3f57cc6eebbb1272656b7d2cd505fcf12e0f9e1805/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b280572b8d18a5902fb8c3f57cc6eebbb1272656b7d2cd505fcf12e0f9e1805/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:47 compute-0 podman[277875]: 2025-11-24 10:01:47.669866246 +0000 UTC m=+0.023657095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b280572b8d18a5902fb8c3f57cc6eebbb1272656b7d2cd505fcf12e0f9e1805/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b280572b8d18a5902fb8c3f57cc6eebbb1272656b7d2cd505fcf12e0f9e1805/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:47 compute-0 podman[277875]: 2025-11-24 10:01:47.776396323 +0000 UTC m=+0.130187192 container init c3628370f77c0d07791d8d35bcca0b82e44f03ae6534d40ab463c1068ec08668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_shamir, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 10:01:47 compute-0 ceph-mon[74331]: pgmap v1048: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 32 op/s
Nov 24 10:01:47 compute-0 ceph-mon[74331]: pgmap v1049: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:01:47 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:01:47 compute-0 ceph-mon[74331]: pgmap v1050: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:01:47 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:47 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:47 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:01:47 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:01:47 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:01:47 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2817330001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:01:47 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/984696211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:01:47 compute-0 podman[277875]: 2025-11-24 10:01:47.787335183 +0000 UTC m=+0.141126032 container start c3628370f77c0d07791d8d35bcca0b82e44f03ae6534d40ab463c1068ec08668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_shamir, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:01:47 compute-0 podman[277875]: 2025-11-24 10:01:47.790380068 +0000 UTC m=+0.144170917 container attach c3628370f77c0d07791d8d35bcca0b82e44f03ae6534d40ab463c1068ec08668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_shamir, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:01:48 compute-0 sudo[277916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:01:48 compute-0 sudo[277916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:48 compute-0 sudo[277916]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:48 compute-0 happy_shamir[277909]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:01:48 compute-0 happy_shamir[277909]: --> All data devices are unavailable
Nov 24 10:01:48 compute-0 systemd[1]: libpod-c3628370f77c0d07791d8d35bcca0b82e44f03ae6534d40ab463c1068ec08668.scope: Deactivated successfully.
Nov 24 10:01:48 compute-0 podman[277875]: 2025-11-24 10:01:48.148793708 +0000 UTC m=+0.502584557 container died c3628370f77c0d07791d8d35bcca0b82e44f03ae6534d40ab463c1068ec08668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:01:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b280572b8d18a5902fb8c3f57cc6eebbb1272656b7d2cd505fcf12e0f9e1805-merged.mount: Deactivated successfully.
Nov 24 10:01:48 compute-0 podman[277875]: 2025-11-24 10:01:48.195414088 +0000 UTC m=+0.549204937 container remove c3628370f77c0d07791d8d35bcca0b82e44f03ae6534d40ab463c1068ec08668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_shamir, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:01:48 compute-0 systemd[1]: libpod-conmon-c3628370f77c0d07791d8d35bcca0b82e44f03ae6534d40ab463c1068ec08668.scope: Deactivated successfully.
Nov 24 10:01:48 compute-0 sudo[277722]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:48 compute-0 sudo[277960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:01:48 compute-0 sudo[277960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:48 compute-0 sudo[277960]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:48 compute-0 sudo[277985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:01:48 compute-0 sshd-session[277747]: Received disconnect from 83.229.122.23 port 39402:11: Bye Bye [preauth]
Nov 24 10:01:48 compute-0 sshd-session[277747]: Disconnected from authenticating user root 83.229.122.23 port 39402 [preauth]
Nov 24 10:01:48 compute-0 sudo[277985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:48.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:48 compute-0 nova_compute[257700]: 2025-11-24 10:01:48.593 257704 INFO nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Creating config drive at /var/lib/nova/instances/5c9d7984-c8b4-481b-8d02-4149b3de004a/disk.config
Nov 24 10:01:48 compute-0 nova_compute[257700]: 2025-11-24 10:01:48.604 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c9d7984-c8b4-481b-8d02-4149b3de004a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk3nkkdvp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:48 compute-0 podman[278055]: 2025-11-24 10:01:48.730479355 +0000 UTC m=+0.041608687 container create 0b19587799ee16032827694098a59fb12091b23713fa003ede49767342464704 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 24 10:01:48 compute-0 nova_compute[257700]: 2025-11-24 10:01:48.747 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c9d7984-c8b4-481b-8d02-4149b3de004a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk3nkkdvp" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:48 compute-0 systemd[1]: Started libpod-conmon-0b19587799ee16032827694098a59fb12091b23713fa003ede49767342464704.scope.
Nov 24 10:01:48 compute-0 nova_compute[257700]: 2025-11-24 10:01:48.780 257704 DEBUG nova.storage.rbd_utils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 5c9d7984-c8b4-481b-8d02-4149b3de004a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:01:48 compute-0 nova_compute[257700]: 2025-11-24 10:01:48.785 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c9d7984-c8b4-481b-8d02-4149b3de004a/disk.config 5c9d7984-c8b4-481b-8d02-4149b3de004a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:01:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:01:48 compute-0 ceph-mon[74331]: pgmap v1051: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:01:48 compute-0 podman[278055]: 2025-11-24 10:01:48.806405448 +0000 UTC m=+0.117534790 container init 0b19587799ee16032827694098a59fb12091b23713fa003ede49767342464704 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 10:01:48 compute-0 podman[278055]: 2025-11-24 10:01:48.711302092 +0000 UTC m=+0.022431444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:01:48 compute-0 podman[278055]: 2025-11-24 10:01:48.814306532 +0000 UTC m=+0.125435864 container start 0b19587799ee16032827694098a59fb12091b23713fa003ede49767342464704 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:01:48 compute-0 podman[278055]: 2025-11-24 10:01:48.817745818 +0000 UTC m=+0.128875150 container attach 0b19587799ee16032827694098a59fb12091b23713fa003ede49767342464704 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_gagarin, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:01:48 compute-0 competent_gagarin[278086]: 167 167
Nov 24 10:01:48 compute-0 systemd[1]: libpod-0b19587799ee16032827694098a59fb12091b23713fa003ede49767342464704.scope: Deactivated successfully.
Nov 24 10:01:48 compute-0 podman[278055]: 2025-11-24 10:01:48.822456784 +0000 UTC m=+0.133586126 container died 0b19587799ee16032827694098a59fb12091b23713fa003ede49767342464704 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_gagarin, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:01:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc488e98b7273644a645e855c9a8c9ade6eb83276a3fa59c634a0df32fe7081c-merged.mount: Deactivated successfully.
Nov 24 10:01:48 compute-0 podman[278055]: 2025-11-24 10:01:48.870016507 +0000 UTC m=+0.181145839 container remove 0b19587799ee16032827694098a59fb12091b23713fa003ede49767342464704 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_gagarin, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 10:01:48 compute-0 systemd[1]: libpod-conmon-0b19587799ee16032827694098a59fb12091b23713fa003ede49767342464704.scope: Deactivated successfully.
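The podman lines from 10:01:48.730 onward trace the full lifecycle (create, init, start, attach, died, remove) of a throwaway container cephadm launches for the ceph-volume call sudo'd at 10:01:48; its only output, "167 167", is most likely cephadm probing the ceph UID/GID baked into the image (167 is the ceph user). The lvm list itself runs in the next one-shot container, youthful_villani, created at 10:01:49. Stripped of the exact bind mounts cephadm adds, such a wrapper invocation is roughly the following sketch, not the literal command:

    $ sudo podman run --rm --privileged --net=host \
          -v /dev:/dev \
          -v /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64:/var/lib/ceph:z \
          quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
          ceph-volume lvm list --format json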
Nov 24 10:01:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:48.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:01:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:48.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:01:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:48.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
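Both dashboard webhook receivers time out, so alertmanager drops the notification after two attempts. Reachability is easy to check from this host; note the receiver URL is plain http on port 8443, so if the Ceph dashboard on those nodes actually serves TLS there, the receiver configuration itself would be the culprit (a guess worth verifying):

    # the 5-second cap is an assumed value, roughly matching the dispatcher's retry window
    $ curl -s -m 5 -o /dev/null -w '%{http_code}\n' http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver
    $ curl -s -m 5 -o /dev/null -w '%{http_code}\n' http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver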
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.031 257704 DEBUG oslo_concurrency.processutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c9d7984-c8b4-481b-8d02-4149b3de004a/disk.config 5c9d7984-c8b4-481b-8d02-4149b3de004a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.033 257704 INFO nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Deleting local config drive /var/lib/nova/instances/5c9d7984-c8b4-481b-8d02-4149b3de004a/disk.config because it was imported into RBD.
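The config drive now exists only as an RBD image in the vms pool. Its presence can be confirmed with the same cephx identity Nova used for the import:

    $ rbd --id openstack --conf /etc/ceph/ceph.conf \
          info vms/5c9d7984-c8b4-481b-8d02-4149b3de004a_disk.config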
Nov 24 10:01:49 compute-0 podman[278130]: 2025-11-24 10:01:49.057583313 +0000 UTC m=+0.047160625 container create e7ce42439edaf090dcd7dc10743e6044955029471d9583a613d4aaa47da73157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_villani, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 10:01:49 compute-0 kernel: tap9a53d19e-b3: entered promiscuous mode
Nov 24 10:01:49 compute-0 NetworkManager[48883]: <info>  [1763978509.0884] manager: (tap9a53d19e-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Nov 24 10:01:49 compute-0 ovn_controller[155123]: 2025-11-24T10:01:49Z|00077|binding|INFO|Claiming lport 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 for this chassis.
Nov 24 10:01:49 compute-0 ovn_controller[155123]: 2025-11-24T10:01:49Z|00078|binding|INFO|9a53d19e-b3d6-44e0-9943-79a68e4f4fc3: Claiming fa:16:3e:aa:32:5c 10.100.0.3
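ovn-controller claims the instance's logical port (MAC fa:16:3e:aa:32:5c, IP 10.100.0.3) for this chassis; a few lines below it marks the port ovn-installed in OVS and up in the Southbound DB. The binding can be cross-checked from any host that can reach the SB database:

    $ ovn-sbctl find Port_Binding logical_port=9a53d19e-b3d6-44e0-9943-79a68e4f4fc3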
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.091 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.094 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:49 compute-0 systemd[1]: Started libpod-conmon-e7ce42439edaf090dcd7dc10743e6044955029471d9583a613d4aaa47da73157.scope.
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.111 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:32:5c 10.100.0.3'], port_security=['fa:16:3e:aa:32:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5c9d7984-c8b4-481b-8d02-4149b3de004a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d754e322-9ff1-4d43-9c36-046a636812dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4ad840a7-269b-4612-ad15-662a3b4097e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=225712e3-6af3-42ae-ae6b-a838e36455df, chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=9a53d19e-b3d6-44e0-9943-79a68e4f4fc3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.112 165073 INFO neutron.agent.ovn.metadata.agent [-] Port 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 in datapath d754e322-9ff1-4d43-9c36-046a636812dd bound to our chassis
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.113 165073 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d754e322-9ff1-4d43-9c36-046a636812dd
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.129 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[e169250e-06e8-4509-9a0d-fbe303eaa820]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.130 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd754e322-91 in ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
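The metadata agent provisions a per-network namespace (ovnmeta-<network-uuid>) and plugs a VETH pair into it. Once provisioning completes, the namespace side (tapd754e322-91) should hold the magic metadata address served by the haproxy started further down; a quick check:

    # expect tapd754e322-91, carrying 169.254.169.254 once haproxy is up
    $ sudo ip netns exec ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd ip -brief addr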
Nov 24 10:01:49 compute-0 systemd-udevd[278163]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 10:01:49 compute-0 podman[278130]: 2025-11-24 10:01:49.0392368 +0000 UTC m=+0.028814122 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:01:49 compute-0 systemd-machined[219130]: New machine qemu-6-instance-0000000a.
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.136 264910 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd754e322-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.136 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[ec47ed15-f2fb-411d-995f-b4d04d282de4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.139 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[41f946a8-b2cf-4b67-8cb7-32e3fa000599]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 NetworkManager[48883]: <info>  [1763978509.1491] device (tap9a53d19e-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 10:01:49 compute-0 NetworkManager[48883]: <info>  [1763978509.1502] device (tap9a53d19e-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.151 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[bbdae410-e2d2-4f10-9b65-1ab1a919b3f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:01:49 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-0000000a.
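The guest is now registered with systemd-machined as qemu-6-instance-0000000a. Either of the following gives a status view; note that on a containerized compute node virsh has to run wherever libvirtd actually lives, which may be inside a container rather than on the host:

    $ machinectl status qemu-6-instance-0000000a
    $ virsh -c qemu:///system dominfo instance-0000000a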
Nov 24 10:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057e4d9090364431335830c785ed316dcc284677128daa406e6e65016b7f3815/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.170 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057e4d9090364431335830c785ed316dcc284677128daa406e6e65016b7f3815/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057e4d9090364431335830c785ed316dcc284677128daa406e6e65016b7f3815/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057e4d9090364431335830c785ed316dcc284677128daa406e6e65016b7f3815/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.175 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4f1e77-aee0-4961-9c2c-f2195a1732d5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 ovn_controller[155123]: 2025-11-24T10:01:49Z|00079|binding|INFO|Setting lport 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 ovn-installed in OVS
Nov 24 10:01:49 compute-0 ovn_controller[155123]: 2025-11-24T10:01:49Z|00080|binding|INFO|Setting lport 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 up in Southbound
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.178 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:49 compute-0 podman[278130]: 2025-11-24 10:01:49.193725021 +0000 UTC m=+0.183302323 container init e7ce42439edaf090dcd7dc10743e6044955029471d9583a613d4aaa47da73157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_villani, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.199 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0d4f73-743d-434a-b77e-6b6df6a18f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 podman[278130]: 2025-11-24 10:01:49.204850525 +0000 UTC m=+0.194427827 container start e7ce42439edaf090dcd7dc10743e6044955029471d9583a613d4aaa47da73157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_villani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.206 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[94589fb1-3a0b-4b4e-b0b5-ad89b0f7f29d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 NetworkManager[48883]: <info>  [1763978509.2082] manager: (tapd754e322-90): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Nov 24 10:01:49 compute-0 podman[278130]: 2025-11-24 10:01:49.211585611 +0000 UTC m=+0.201162913 container attach e7ce42439edaf090dcd7dc10743e6044955029471d9583a613d4aaa47da73157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_villani, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.242 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2727db-e8f7-46ad-b1f1-4ea9610e0fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.245 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[0c51ddc8-ba34-4bb7-b977-d7c331844f71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 NetworkManager[48883]: <info>  [1763978509.2693] device (tapd754e322-90): carrier: link connected
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.277 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[d878d0cb-a379-423c-9794-b72d1de554e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:49.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.300 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[65cd08e4-7e16-4c14-b57e-0906d5a1ed1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd754e322-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:89:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445683, 'reachable_time': 22416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278197, 'error': None, 'target': 'ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.321 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[69b89cb1-40a8-4642-9be7-02da02f17803]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:8949'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445683, 'tstamp': 445683}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278198, 'error': None, 'target': 'ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.342 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[8dabf936-7219-4dad-ab8f-c93777f2602a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd754e322-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:89:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445683, 'reachable_time': 22416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278199, 'error': None, 'target': 'ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.387 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[fac41391-c12a-49ab-a4d8-db3db3b2cb1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.478 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[bac9873d-76c8-4fc8-9e35-511f5baf06a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.481 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd754e322-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.481 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.482 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd754e322-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.484 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:49 compute-0 kernel: tapd754e322-90: entered promiscuous mode
Nov 24 10:01:49 compute-0 NetworkManager[48883]: <info>  [1763978509.4867] manager: (tapd754e322-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.487 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.489 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd754e322-90, col_values=(('external_ids', {'iface-id': '7e4a5eb6-2483-4598-9b08-ceed7ffc252b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:01:49 compute-0 ovn_controller[155123]: 2025-11-24T10:01:49Z|00081|binding|INFO|Releasing lport 7e4a5eb6-2483-4598-9b08-ceed7ffc252b from this chassis (sb_readonly=0)
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.547 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.549 165073 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d754e322-9ff1-4d43-9c36-046a636812dd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d754e322-9ff1-4d43-9c36-046a636812dd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.551 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[84d64482-97b5-406a-9aa4-e536f481385d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.551 165073 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: global
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     log         /dev/log local0 debug
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     log-tag     haproxy-metadata-proxy-d754e322-9ff1-4d43-9c36-046a636812dd
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     user        root
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     group       root
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     maxconn     1024
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     pidfile     /var/lib/neutron/external/pids/d754e322-9ff1-4d43-9c36-046a636812dd.pid.haproxy
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     daemon
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: defaults
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     log global
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     mode http
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     option httplog
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     option dontlognull
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     option http-server-close
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     option forwardfor
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     retries                 3
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     timeout http-request    30s
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     timeout connect         30s
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     timeout client          32s
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     timeout server          32s
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     timeout http-keep-alive 30s
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: listen listener
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     bind 169.254.169.254:80
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:     http-request add-header X-OVN-Network-ID d754e322-9ff1-4d43-9c36-046a636812dd
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 10:01:49 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:01:49.552 165073 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd', 'env', 'PROCESS_TAG=haproxy-d754e322-9ff1-4d43-9c36-046a636812dd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d754e322-9ff1-4d43-9c36-046a636812dd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
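The rendered haproxy configuration above binds 169.254.169.254:80 inside the namespace and forwards to a UNIX socket backend (/var/lib/neutron/metadata_proxy), adding the X-OVN-Network-ID header so the metadata service can attribute requests. Since haproxy's check mode only parses the file without binding sockets, the config can be validated outside the namespace:

    $ haproxy -c -f /var/lib/neutron/ovn-metadata-proxy/d754e322-9ff1-4d43-9c36-046a636812dd.conf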
Nov 24 10:01:49 compute-0 youthful_villani[278156]: {
Nov 24 10:01:49 compute-0 youthful_villani[278156]:     "0": [
Nov 24 10:01:49 compute-0 youthful_villani[278156]:         {
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             "devices": [
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "/dev/loop3"
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             ],
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             "lv_name": "ceph_lv0",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             "lv_size": "21470642176",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             "name": "ceph_lv0",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             "tags": {
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.cluster_name": "ceph",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.crush_device_class": "",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.encrypted": "0",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.osd_id": "0",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.type": "block",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.vdo": "0",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:                 "ceph.with_tpm": "0"
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             },
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             "type": "block",
Nov 24 10:01:49 compute-0 youthful_villani[278156]:             "vg_name": "ceph_vg0"
Nov 24 10:01:49 compute-0 youthful_villani[278156]:         }
Nov 24 10:01:49 compute-0 youthful_villani[278156]:     ]
Nov 24 10:01:49 compute-0 youthful_villani[278156]: }
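The JSON emitted by youthful_villani is the ceph-volume lvm list result: a single OSD (osd.0, fsid 4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c) on LV ceph_vg0/ceph_lv0 backed by /dev/loop3, with its identity recorded in LV tags. The interesting fields can be pulled out with jq (jq assumed available on the host):

    $ sudo cephadm ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json \
          | jq -r 'to_entries[] | "osd.\(.key) \(.value[0].lv_path) \(.value[0].tags["ceph.osd_fsid"])"'
    # -> osd.0 /dev/ceph_vg0/ceph_lv0 4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c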
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.563 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:49 compute-0 systemd[1]: libpod-e7ce42439edaf090dcd7dc10743e6044955029471d9583a613d4aaa47da73157.scope: Deactivated successfully.
Nov 24 10:01:49 compute-0 conmon[278156]: conmon e7ce42439edaf090dcd7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e7ce42439edaf090dcd7dc10743e6044955029471d9583a613d4aaa47da73157.scope/container/memory.events
Nov 24 10:01:49 compute-0 podman[278130]: 2025-11-24 10:01:49.611516156 +0000 UTC m=+0.601093478 container died e7ce42439edaf090dcd7dc10743e6044955029471d9583a613d4aaa47da73157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.629 257704 DEBUG nova.compute.manager [req-84f96271-4cbc-48f9-b95d-85a6d21836ff req-3fa860ba-59f2-404a-be77-829bc0f853f6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Received event network-vif-plugged-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.629 257704 DEBUG oslo_concurrency.lockutils [req-84f96271-4cbc-48f9-b95d-85a6d21836ff req-3fa860ba-59f2-404a-be77-829bc0f853f6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.629 257704 DEBUG oslo_concurrency.lockutils [req-84f96271-4cbc-48f9-b95d-85a6d21836ff req-3fa860ba-59f2-404a-be77-829bc0f853f6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.630 257704 DEBUG oslo_concurrency.lockutils [req-84f96271-4cbc-48f9-b95d-85a6d21836ff req-3fa860ba-59f2-404a-be77-829bc0f853f6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.630 257704 DEBUG nova.compute.manager [req-84f96271-4cbc-48f9-b95d-85a6d21836ff req-3fa860ba-59f2-404a-be77-829bc0f853f6 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Processing event network-vif-plugged-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 10:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-057e4d9090364431335830c785ed316dcc284677128daa406e6e65016b7f3815-merged.mount: Deactivated successfully.
Nov 24 10:01:49 compute-0 podman[278130]: 2025-11-24 10:01:49.661796486 +0000 UTC m=+0.651373818 container remove e7ce42439edaf090dcd7dc10743e6044955029471d9583a613d4aaa47da73157 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_villani, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:01:49 compute-0 systemd[1]: libpod-conmon-e7ce42439edaf090dcd7dc10743e6044955029471d9583a613d4aaa47da73157.scope: Deactivated successfully.
Nov 24 10:01:49 compute-0 sudo[277985]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.769 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978509.7689943, 5c9d7984-c8b4-481b-8d02-4149b3de004a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.769 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] VM Started (Lifecycle Event)
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.771 257704 DEBUG nova.compute.manager [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.777 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.785 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.788 257704 INFO nova.virt.libvirt.driver [-] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Instance spawned successfully.
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.789 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 10:01:49 compute-0 sudo[278267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.795 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 10:01:49 compute-0 sudo[278267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:49 compute-0 sudo[278267]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.812 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.813 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978509.7691972, 5c9d7984-c8b4-481b-8d02-4149b3de004a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.813 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] VM Paused (Lifecycle Event)
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.816 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.817 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.817 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.818 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.818 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.819 257704 DEBUG nova.virt.libvirt.driver [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.844 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.850 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978509.777211, 5c9d7984-c8b4-481b-8d02-4149b3de004a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.851 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] VM Resumed (Lifecycle Event)
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.876 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.880 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 10:01:49 compute-0 sudo[278294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:01:49 compute-0 sudo[278294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.886 257704 INFO nova.compute.manager [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Took 12.08 seconds to spawn the instance on the hypervisor.
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.887 257704 DEBUG nova.compute.manager [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.899 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.973 257704 INFO nova.compute.manager [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Took 13.07 seconds to build instance.
Nov 24 10:01:49 compute-0 nova_compute[257700]: 2025-11-24 10:01:49.989 257704 DEBUG oslo_concurrency.lockutils [None req-b6701651-5a98-438e-bdeb-34e51bd8323a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:01:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:01:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:01:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:01:50 compute-0 podman[278331]: 2025-11-24 10:01:50.007388449 +0000 UTC m=+0.103886943 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:01:50 compute-0 podman[278350]: 2025-11-24 10:01:50.025568337 +0000 UTC m=+0.073161385 container create a0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:01:50 compute-0 systemd[1]: Started libpod-conmon-a0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11.scope.
Nov 24 10:01:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:01:50 compute-0 podman[278350]: 2025-11-24 10:01:49.996578113 +0000 UTC m=+0.044171191 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de4b3924e7f0f0d63091c0d7af7216ee820e368195fc950db1d0a0498ea101c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:50 compute-0 podman[278350]: 2025-11-24 10:01:50.105327694 +0000 UTC m=+0.152920752 container init a0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:01:50 compute-0 podman[278350]: 2025-11-24 10:01:50.111815585 +0000 UTC m=+0.159408633 container start a0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 10:01:50 compute-0 neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd[278374]: [NOTICE]   (278385) : New worker (278392) forked
Nov 24 10:01:50 compute-0 neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd[278374]: [NOTICE]   (278385) : Loading success.
Nov 24 10:01:50 compute-0 podman[278431]: 2025-11-24 10:01:50.306376474 +0000 UTC m=+0.036585844 container create bc3ca8febce2218961f08aee6cf13e98e702f3defac56cba5e93741415d79377 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Nov 24 10:01:50 compute-0 systemd[1]: Started libpod-conmon-bc3ca8febce2218961f08aee6cf13e98e702f3defac56cba5e93741415d79377.scope.
Nov 24 10:01:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:01:50 compute-0 podman[278431]: 2025-11-24 10:01:50.291032085 +0000 UTC m=+0.021241475 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:01:50 compute-0 podman[278431]: 2025-11-24 10:01:50.406123133 +0000 UTC m=+0.136332523 container init bc3ca8febce2218961f08aee6cf13e98e702f3defac56cba5e93741415d79377 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:01:50 compute-0 podman[278431]: 2025-11-24 10:01:50.416200492 +0000 UTC m=+0.146409862 container start bc3ca8febce2218961f08aee6cf13e98e702f3defac56cba5e93741415d79377 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 10:01:50 compute-0 podman[278431]: 2025-11-24 10:01:50.419936484 +0000 UTC m=+0.150145884 container attach bc3ca8febce2218961f08aee6cf13e98e702f3defac56cba5e93741415d79377 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Nov 24 10:01:50 compute-0 hopeful_burnell[278449]: 167 167
Nov 24 10:01:50 compute-0 systemd[1]: libpod-bc3ca8febce2218961f08aee6cf13e98e702f3defac56cba5e93741415d79377.scope: Deactivated successfully.
Nov 24 10:01:50 compute-0 conmon[278449]: conmon bc3ca8febce2218961f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc3ca8febce2218961f08aee6cf13e98e702f3defac56cba5e93741415d79377.scope/container/memory.events
Nov 24 10:01:50 compute-0 podman[278454]: 2025-11-24 10:01:50.479836822 +0000 UTC m=+0.035005695 container died bc3ca8febce2218961f08aee6cf13e98e702f3defac56cba5e93741415d79377 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:01:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2851bd4494367c51a5fb3d08c181e00222f70dd78a9ee81fbf86228e76ad5af7-merged.mount: Deactivated successfully.
Nov 24 10:01:50 compute-0 podman[278454]: 2025-11-24 10:01:50.522552305 +0000 UTC m=+0.077721188 container remove bc3ca8febce2218961f08aee6cf13e98e702f3defac56cba5e93741415d79377 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 24 10:01:50 compute-0 systemd[1]: libpod-conmon-bc3ca8febce2218961f08aee6cf13e98e702f3defac56cba5e93741415d79377.scope: Deactivated successfully.
Nov 24 10:01:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:50.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:50 compute-0 podman[278477]: 2025-11-24 10:01:50.774054838 +0000 UTC m=+0.047190695 container create 0e3ec751774abe761c6c61020db5cf45bffcf55bc527e71c36fa16f83ce826ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chaum, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 10:01:50 compute-0 systemd[1]: Started libpod-conmon-0e3ec751774abe761c6c61020db5cf45bffcf55bc527e71c36fa16f83ce826ef.scope.
Nov 24 10:01:50 compute-0 ceph-mon[74331]: pgmap v1052: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Nov 24 10:01:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a40c2df131652eaa21d099dcd6640f94570c2c1ab2e9525c326a086268791fc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a40c2df131652eaa21d099dcd6640f94570c2c1ab2e9525c326a086268791fc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a40c2df131652eaa21d099dcd6640f94570c2c1ab2e9525c326a086268791fc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a40c2df131652eaa21d099dcd6640f94570c2c1ab2e9525c326a086268791fc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:01:50 compute-0 podman[278477]: 2025-11-24 10:01:50.844645259 +0000 UTC m=+0.117781126 container init 0e3ec751774abe761c6c61020db5cf45bffcf55bc527e71c36fa16f83ce826ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chaum, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 10:01:50 compute-0 podman[278477]: 2025-11-24 10:01:50.751676666 +0000 UTC m=+0.024812543 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:01:50 compute-0 podman[278477]: 2025-11-24 10:01:50.850969556 +0000 UTC m=+0.124105413 container start 0e3ec751774abe761c6c61020db5cf45bffcf55bc527e71c36fa16f83ce826ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 10:01:50 compute-0 podman[278477]: 2025-11-24 10:01:50.854265106 +0000 UTC m=+0.127400963 container attach 0e3ec751774abe761c6c61020db5cf45bffcf55bc527e71c36fa16f83ce826ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chaum, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:01:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:01:50] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Nov 24 10:01:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:01:50] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Nov 24 10:01:51 compute-0 nova_compute[257700]: 2025-11-24 10:01:51.156 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:51.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:51 compute-0 lvm[278567]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:01:51 compute-0 lvm[278567]: VG ceph_vg0 finished
Nov 24 10:01:51 compute-0 magical_chaum[278494]: {}
Nov 24 10:01:51 compute-0 systemd[1]: libpod-0e3ec751774abe761c6c61020db5cf45bffcf55bc527e71c36fa16f83ce826ef.scope: Deactivated successfully.
Nov 24 10:01:51 compute-0 systemd[1]: libpod-0e3ec751774abe761c6c61020db5cf45bffcf55bc527e71c36fa16f83ce826ef.scope: Consumed 1.255s CPU time.
Nov 24 10:01:51 compute-0 podman[278477]: 2025-11-24 10:01:51.675330928 +0000 UTC m=+0.948466855 container died 0e3ec751774abe761c6c61020db5cf45bffcf55bc527e71c36fa16f83ce826ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:01:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a40c2df131652eaa21d099dcd6640f94570c2c1ab2e9525c326a086268791fc2-merged.mount: Deactivated successfully.
Nov 24 10:01:51 compute-0 nova_compute[257700]: 2025-11-24 10:01:51.733 257704 DEBUG nova.compute.manager [req-4c449549-8dfc-4c32-812b-7d03ddfd23b2 req-d032669a-9278-4637-a329-07124e9cfcb8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Received event network-vif-plugged-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:01:51 compute-0 nova_compute[257700]: 2025-11-24 10:01:51.736 257704 DEBUG oslo_concurrency.lockutils [req-4c449549-8dfc-4c32-812b-7d03ddfd23b2 req-d032669a-9278-4637-a329-07124e9cfcb8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:01:51 compute-0 nova_compute[257700]: 2025-11-24 10:01:51.737 257704 DEBUG oslo_concurrency.lockutils [req-4c449549-8dfc-4c32-812b-7d03ddfd23b2 req-d032669a-9278-4637-a329-07124e9cfcb8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:01:51 compute-0 nova_compute[257700]: 2025-11-24 10:01:51.738 257704 DEBUG oslo_concurrency.lockutils [req-4c449549-8dfc-4c32-812b-7d03ddfd23b2 req-d032669a-9278-4637-a329-07124e9cfcb8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:01:51 compute-0 nova_compute[257700]: 2025-11-24 10:01:51.743 257704 DEBUG nova.compute.manager [req-4c449549-8dfc-4c32-812b-7d03ddfd23b2 req-d032669a-9278-4637-a329-07124e9cfcb8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] No waiting events found dispatching network-vif-plugged-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:01:51 compute-0 nova_compute[257700]: 2025-11-24 10:01:51.744 257704 WARNING nova.compute.manager [req-4c449549-8dfc-4c32-812b-7d03ddfd23b2 req-d032669a-9278-4637-a329-07124e9cfcb8 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Received unexpected event network-vif-plugged-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 for instance with vm_state active and task_state None.
Nov 24 10:01:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 96 op/s
Nov 24 10:01:51 compute-0 podman[278477]: 2025-11-24 10:01:51.748186605 +0000 UTC m=+1.021322472 container remove 0e3ec751774abe761c6c61020db5cf45bffcf55bc527e71c36fa16f83ce826ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 24 10:01:51 compute-0 systemd[1]: libpod-conmon-0e3ec751774abe761c6c61020db5cf45bffcf55bc527e71c36fa16f83ce826ef.scope: Deactivated successfully.
Nov 24 10:01:51 compute-0 sudo[278294]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:01:51 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:01:51 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:51 compute-0 sudo[278585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:01:51 compute-0 sudo[278585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:01:51 compute-0 sudo[278585]: pam_unix(sudo:session): session closed for user root
Nov 24 10:01:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:52.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:52 compute-0 nova_compute[257700]: 2025-11-24 10:01:52.613 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:52 compute-0 ceph-mon[74331]: pgmap v1053: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 96 op/s
Nov 24 10:01:52 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:52 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:01:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:53.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:53 compute-0 NetworkManager[48883]: <info>  [1763978513.4995] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Nov 24 10:01:53 compute-0 NetworkManager[48883]: <info>  [1763978513.5003] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 24 10:01:53 compute-0 nova_compute[257700]: 2025-11-24 10:01:53.498 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:53 compute-0 ovn_controller[155123]: 2025-11-24T10:01:53Z|00082|binding|INFO|Releasing lport 7e4a5eb6-2483-4598-9b08-ceed7ffc252b from this chassis (sb_readonly=0)
Nov 24 10:01:53 compute-0 nova_compute[257700]: 2025-11-24 10:01:53.502 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:53 compute-0 nova_compute[257700]: 2025-11-24 10:01:53.504 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:53 compute-0 ovn_controller[155123]: 2025-11-24T10:01:53Z|00083|binding|INFO|Releasing lport 7e4a5eb6-2483-4598-9b08-ceed7ffc252b from this chassis (sb_readonly=0)
Nov 24 10:01:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Nov 24 10:01:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:54.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:54 compute-0 nova_compute[257700]: 2025-11-24 10:01:54.634 257704 DEBUG nova.compute.manager [req-e0bd6df8-0637-405f-92aa-47e87fa929db req-b00d274e-641c-4152-88ad-6de2f32f9390 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Received event network-changed-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:01:54 compute-0 nova_compute[257700]: 2025-11-24 10:01:54.635 257704 DEBUG nova.compute.manager [req-e0bd6df8-0637-405f-92aa-47e87fa929db req-b00d274e-641c-4152-88ad-6de2f32f9390 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Refreshing instance network info cache due to event network-changed-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:01:54 compute-0 nova_compute[257700]: 2025-11-24 10:01:54.635 257704 DEBUG oslo_concurrency.lockutils [req-e0bd6df8-0637-405f-92aa-47e87fa929db req-b00d274e-641c-4152-88ad-6de2f32f9390 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-5c9d7984-c8b4-481b-8d02-4149b3de004a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:01:54 compute-0 nova_compute[257700]: 2025-11-24 10:01:54.635 257704 DEBUG oslo_concurrency.lockutils [req-e0bd6df8-0637-405f-92aa-47e87fa929db req-b00d274e-641c-4152-88ad-6de2f32f9390 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-5c9d7984-c8b4-481b-8d02-4149b3de004a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:01:54 compute-0 nova_compute[257700]: 2025-11-24 10:01:54.635 257704 DEBUG nova.network.neutron [req-e0bd6df8-0637-405f-92aa-47e87fa929db req-b00d274e-641c-4152-88ad-6de2f32f9390 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Refreshing network info cache for port 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:01:54 compute-0 sshd-session[278613]: Invalid user kyt from 36.255.3.203 port 36023
Nov 24 10:01:54 compute-0 ceph-mon[74331]: pgmap v1054: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Nov 24 10:01:54 compute-0 sshd-session[278613]: Received disconnect from 36.255.3.203 port 36023:11: Bye Bye [preauth]
Nov 24 10:01:54 compute-0 sshd-session[278613]: Disconnected from invalid user kyt 36.255.3.203 port 36023 [preauth]
Nov 24 10:01:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:01:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:01:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:01:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:01:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:55.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Nov 24 10:01:56 compute-0 nova_compute[257700]: 2025-11-24 10:01:56.157 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:56.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:56 compute-0 nova_compute[257700]: 2025-11-24 10:01:56.646 257704 DEBUG nova.network.neutron [req-e0bd6df8-0637-405f-92aa-47e87fa929db req-b00d274e-641c-4152-88ad-6de2f32f9390 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Updated VIF entry in instance network info cache for port 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:01:56 compute-0 nova_compute[257700]: 2025-11-24 10:01:56.647 257704 DEBUG nova.network.neutron [req-e0bd6df8-0637-405f-92aa-47e87fa929db req-b00d274e-641c-4152-88ad-6de2f32f9390 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Updating instance_info_cache with network_info: [{"id": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "address": "fa:16:3e:aa:32:5c", "network": {"id": "d754e322-9ff1-4d43-9c36-046a636812dd", "bridge": "br-int", "label": "tempest-network-smoke--1566260689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a53d19e-b3", "ovs_interfaceid": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:01:56 compute-0 nova_compute[257700]: 2025-11-24 10:01:56.676 257704 DEBUG oslo_concurrency.lockutils [req-e0bd6df8-0637-405f-92aa-47e87fa929db req-b00d274e-641c-4152-88ad-6de2f32f9390 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-5c9d7984-c8b4-481b-8d02-4149b3de004a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:01:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:01:56 compute-0 ceph-mon[74331]: pgmap v1055: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Nov 24 10:01:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:57.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:57.555Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:01:57 compute-0 nova_compute[257700]: 2025-11-24 10:01:57.616 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:01:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 24 10:01:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:01:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:01:58.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:01:58 compute-0 ceph-mon[74331]: pgmap v1056: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Nov 24 10:01:58 compute-0 sshd-session[278618]: Invalid user system from 14.215.126.91 port 54172
Nov 24 10:01:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:58.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:01:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:01:58.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:01:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:01:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:01:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:01:59.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:01:59 compute-0 sshd-session[278618]: Received disconnect from 14.215.126.91 port 54172:11: Bye Bye [preauth]
Nov 24 10:01:59 compute-0 sshd-session[278618]: Disconnected from invalid user system 14.215.126.91 port 54172 [preauth]
Nov 24 10:01:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 10:02:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:02:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:02:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:01:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:02:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:02:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:00.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:00 compute-0 ceph-mon[74331]: pgmap v1057: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 10:02:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:02:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:02:00] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 24 10:02:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:02:00] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 24 10:02:01 compute-0 nova_compute[257700]: 2025-11-24 10:02:01.159 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:01.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 24 10:02:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:02.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:02 compute-0 nova_compute[257700]: 2025-11-24 10:02:02.620 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:02 compute-0 ceph-mon[74331]: pgmap v1058: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 24 10:02:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2570801218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:02:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2570801218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:02:03 compute-0 ovn_controller[155123]: 2025-11-24T10:02:03Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:32:5c 10.100.0.3
Nov 24 10:02:03 compute-0 ovn_controller[155123]: 2025-11-24T10:02:03Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:32:5c 10.100.0.3
Nov 24 10:02:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 525 KiB/s rd, 18 op/s
Nov 24 10:02:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:04.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:04 compute-0 ceph-mon[74331]: pgmap v1059: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 525 KiB/s rd, 18 op/s
Nov 24 10:02:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:02:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:02:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:02:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:02:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:05.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 525 KiB/s rd, 18 op/s
Nov 24 10:02:06 compute-0 nova_compute[257700]: 2025-11-24 10:02:06.161 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:02:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:06.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:02:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:06 compute-0 ceph-mon[74331]: pgmap v1060: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 525 KiB/s rd, 18 op/s
Nov 24 10:02:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:07.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:07.556Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:02:07 compute-0 nova_compute[257700]: 2025-11-24 10:02:07.623 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 907 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Nov 24 10:02:08 compute-0 sudo[278632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:02:08 compute-0 sudo[278632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:08 compute-0 sudo[278632]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:08.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:08.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:02:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:08.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:02:08 compute-0 ceph-mon[74331]: pgmap v1061: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 907 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Nov 24 10:02:09 compute-0 nova_compute[257700]: 2025-11-24 10:02:09.003 257704 INFO nova.compute.manager [None req-0303ec09-24a2-4977-a20e-7047e8c28476 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Get console output
Nov 24 10:02:09 compute-0 nova_compute[257700]: 2025-11-24 10:02:09.009 266539 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 10:02:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:09.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:09 compute-0 ovn_controller[155123]: 2025-11-24T10:02:09Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:32:5c 10.100.0.3
Nov 24 10:02:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 10:02:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:02:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:02:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:02:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:02:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:10.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:10 compute-0 ceph-mon[74331]: pgmap v1062: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 10:02:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:02:10] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 24 10:02:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:02:10] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 24 10:02:11 compute-0 nova_compute[257700]: 2025-11-24 10:02:11.163 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:11.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 10:02:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:12.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:12 compute-0 nova_compute[257700]: 2025-11-24 10:02:12.626 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:12 compute-0 ceph-mon[74331]: pgmap v1063: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 10:02:13 compute-0 ovn_controller[155123]: 2025-11-24T10:02:13Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:32:5c 10.100.0.3
Nov 24 10:02:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:13.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 10:02:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:14.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:14 compute-0 podman[278664]: 2025-11-24 10:02:14.797108148 +0000 UTC m=+0.068113821 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 10:02:14 compute-0 podman[278665]: 2025-11-24 10:02:14.82394849 +0000 UTC m=+0.095235160 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:02:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:02:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:02:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:02:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:02:15 compute-0 ceph-mon[74331]: pgmap v1064: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 10:02:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:15.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:15 compute-0 ovn_controller[155123]: 2025-11-24T10:02:15Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:32:5c 10.100.0.3
Nov 24 10:02:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:02:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:02:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:02:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:02:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:02:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:02:15 compute-0 nova_compute[257700]: 2025-11-24 10:02:15.737 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:15 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:15.736 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:02:15 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:15.738 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 10:02:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 10:02:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.166 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.616 257704 DEBUG nova.compute.manager [req-860abff8-e272-4256-a2f6-10a3f4704b12 req-da9adae5-b89a-4673-99d7-9cdc7be3c338 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Received event network-changed-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:02:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.617 257704 DEBUG nova.compute.manager [req-860abff8-e272-4256-a2f6-10a3f4704b12 req-da9adae5-b89a-4673-99d7-9cdc7be3c338 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Refreshing instance network info cache due to event network-changed-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.618 257704 DEBUG oslo_concurrency.lockutils [req-860abff8-e272-4256-a2f6-10a3f4704b12 req-da9adae5-b89a-4673-99d7-9cdc7be3c338 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-5c9d7984-c8b4-481b-8d02-4149b3de004a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:02:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:16.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.618 257704 DEBUG oslo_concurrency.lockutils [req-860abff8-e272-4256-a2f6-10a3f4704b12 req-da9adae5-b89a-4673-99d7-9cdc7be3c338 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-5c9d7984-c8b4-481b-8d02-4149b3de004a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.618 257704 DEBUG nova.network.neutron [req-860abff8-e272-4256-a2f6-10a3f4704b12 req-da9adae5-b89a-4673-99d7-9cdc7be3c338 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Refreshing network info cache for port 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.697 257704 DEBUG oslo_concurrency.lockutils [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "5c9d7984-c8b4-481b-8d02-4149b3de004a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.697 257704 DEBUG oslo_concurrency.lockutils [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.698 257704 DEBUG oslo_concurrency.lockutils [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.698 257704 DEBUG oslo_concurrency.lockutils [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.699 257704 DEBUG oslo_concurrency.lockutils [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.700 257704 INFO nova.compute.manager [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Terminating instance
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.702 257704 DEBUG nova.compute.manager [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 10:02:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:16 compute-0 kernel: tap9a53d19e-b3 (unregistering): left promiscuous mode
Nov 24 10:02:16 compute-0 NetworkManager[48883]: <info>  [1763978536.7548] device (tap9a53d19e-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 10:02:16 compute-0 ovn_controller[155123]: 2025-11-24T10:02:16Z|00084|binding|INFO|Releasing lport 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 from this chassis (sb_readonly=0)
Nov 24 10:02:16 compute-0 ovn_controller[155123]: 2025-11-24T10:02:16Z|00085|binding|INFO|Setting lport 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 down in Southbound
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.766 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:16 compute-0 ovn_controller[155123]: 2025-11-24T10:02:16Z|00086|binding|INFO|Removing iface tap9a53d19e-b3 ovn-installed in OVS
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.771 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:16 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:16.776 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:32:5c 10.100.0.3'], port_security=['fa:16:3e:aa:32:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5c9d7984-c8b4-481b-8d02-4149b3de004a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d754e322-9ff1-4d43-9c36-046a636812dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4ad840a7-269b-4612-ad15-662a3b4097e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=225712e3-6af3-42ae-ae6b-a838e36455df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=9a53d19e-b3d6-44e0-9943-79a68e4f4fc3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:02:16 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:16.777 165073 INFO neutron.agent.ovn.metadata.agent [-] Port 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 in datapath d754e322-9ff1-4d43-9c36-046a636812dd unbound from our chassis
Nov 24 10:02:16 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:16.778 165073 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d754e322-9ff1-4d43-9c36-046a636812dd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 10:02:16 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:16.780 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[407a2e03-b733-49e3-a81c-d2805321ecea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:16 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:16.780 165073 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd namespace which is not needed anymore
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.805 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:16 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 24 10:02:16 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000a.scope: Consumed 14.964s CPU time.
Nov 24 10:02:16 compute-0 systemd-machined[219130]: Machine qemu-6-instance-0000000a terminated.
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 10:02:16 compute-0 neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd[278374]: [NOTICE]   (278385) : haproxy version is 2.8.14-c23fe91
Nov 24 10:02:16 compute-0 neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd[278374]: [NOTICE]   (278385) : path to executable is /usr/sbin/haproxy
Nov 24 10:02:16 compute-0 neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd[278374]: [WARNING]  (278385) : Exiting Master process...
Nov 24 10:02:16 compute-0 neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd[278374]: [ALERT]    (278385) : Current worker (278392) exited with code 143 (Terminated)
Nov 24 10:02:16 compute-0 neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd[278374]: [WARNING]  (278385) : All workers exited. Exiting... (0)
Nov 24 10:02:16 compute-0 systemd[1]: libpod-a0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11.scope: Deactivated successfully.
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.940 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 10:02:16 compute-0 podman[278734]: 2025-11-24 10:02:16.944596934 +0000 UTC m=+0.057097528 container died a0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.945 257704 INFO nova.virt.libvirt.driver [-] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Instance destroyed successfully.
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.946 257704 DEBUG nova.objects.instance [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 5c9d7984-c8b4-481b-8d02-4149b3de004a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.956 257704 DEBUG nova.virt.libvirt.vif [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T10:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-956984475',display_name='tempest-TestNetworkBasicOps-server-956984475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-956984475',id=10,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKuoTfe68HXZGbJfjz8QvzvUcn9raHUm7jcwqmlo2grIjCxn4xXYaNtD3Te7wsfeP7X8BrDZpRm661umH273S3VAqs9EBsjs6AcgTXTQEtr5AtHIzKyqSFDKtZjPMSYskQ==',key_name='tempest-TestNetworkBasicOps-1736723705',keypairs=<?>,launch_index=0,launched_at=2025-11-24T10:01:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-d03wzt7p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T10:01:49Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=5c9d7984-c8b4-481b-8d02-4149b3de004a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "address": "fa:16:3e:aa:32:5c", "network": {"id": "d754e322-9ff1-4d43-9c36-046a636812dd", "bridge": "br-int", "label": "tempest-network-smoke--1566260689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a53d19e-b3", "ovs_interfaceid": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.957 257704 DEBUG nova.network.os_vif_util [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "address": "fa:16:3e:aa:32:5c", "network": {"id": "d754e322-9ff1-4d43-9c36-046a636812dd", "bridge": "br-int", "label": "tempest-network-smoke--1566260689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a53d19e-b3", "ovs_interfaceid": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.958 257704 DEBUG nova.network.os_vif_util [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:32:5c,bridge_name='br-int',has_traffic_filtering=True,id=9a53d19e-b3d6-44e0-9943-79a68e4f4fc3,network=Network(d754e322-9ff1-4d43-9c36-046a636812dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a53d19e-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.958 257704 DEBUG os_vif [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:32:5c,bridge_name='br-int',has_traffic_filtering=True,id=9a53d19e-b3d6-44e0-9943-79a68e4f4fc3,network=Network(d754e322-9ff1-4d43-9c36-046a636812dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a53d19e-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.960 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:16 compute-0 nova_compute[257700]: 2025-11-24 10:02:16.961 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a53d19e-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.010 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.012 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.015 257704 INFO os_vif [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:32:5c,bridge_name='br-int',has_traffic_filtering=True,id=9a53d19e-b3d6-44e0-9943-79a68e4f4fc3,network=Network(d754e322-9ff1-4d43-9c36-046a636812dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a53d19e-b3')
Nov 24 10:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11-userdata-shm.mount: Deactivated successfully.
Nov 24 10:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2de4b3924e7f0f0d63091c0d7af7216ee820e368195fc950db1d0a0498ea101c-merged.mount: Deactivated successfully.
Nov 24 10:02:17 compute-0 ceph-mon[74331]: pgmap v1065: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 10:02:17 compute-0 podman[278734]: 2025-11-24 10:02:17.039861594 +0000 UTC m=+0.152362178 container cleanup a0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:02:17 compute-0 systemd[1]: libpod-conmon-a0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11.scope: Deactivated successfully.
Nov 24 10:02:17 compute-0 podman[278787]: 2025-11-24 10:02:17.105740238 +0000 UTC m=+0.045239386 container remove a0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 10:02:17 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:17.112 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[d82aafba-f0f2-450e-9a00-a6a33d15c5e3]: (4, ('Mon Nov 24 10:02:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd (a0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11)\na0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11\nMon Nov 24 10:02:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd (a0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11)\na0fec16f229e1375ffa084e0d75bcecee1f9390c79da7f346364f6d442095b11\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:17 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:17.113 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[43397f74-9cf3-49f4-abcc-ab0c6669aa48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:17 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:17.114 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd754e322-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.115 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:17 compute-0 kernel: tapd754e322-90: left promiscuous mode
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.117 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:17 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:17.120 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[004f1278-2d19-4efc-a090-36607a0d427e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.138 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:17 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:17.138 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[41e27b9c-2d6b-4c24-87a1-1c5d2f77e7a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:17 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:17.139 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[36c97681-0cba-46e4-a641-0c2dbfb62d34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:17 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:17.154 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[078945f2-787a-4c9d-b030-1453989050c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445676, 'reachable_time': 39943, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278808, 'error': None, 'target': 'ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:17 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:17.157 165227 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d754e322-9ff1-4d43-9c36-046a636812dd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 10:02:17 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:17.157 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0dddd0-d39a-4035-a081-6f6c5654c4a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:17 compute-0 systemd[1]: run-netns-ovnmeta\x2dd754e322\x2d9ff1\x2d4d43\x2d9c36\x2d046a636812dd.mount: Deactivated successfully.
Nov 24 10:02:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:17.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.453 257704 INFO nova.virt.libvirt.driver [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Deleting instance files /var/lib/nova/instances/5c9d7984-c8b4-481b-8d02-4149b3de004a_del
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.455 257704 INFO nova.virt.libvirt.driver [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Deletion of /var/lib/nova/instances/5c9d7984-c8b4-481b-8d02-4149b3de004a_del complete
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.547 257704 INFO nova.compute.manager [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.548 257704 DEBUG oslo.service.loopingcall [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.549 257704 DEBUG nova.compute.manager [-] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.549 257704 DEBUG nova.network.neutron [-] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 10:02:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:17.557Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.598 257704 DEBUG nova.compute.manager [req-d982a4b6-38e3-446c-8346-2a5ab1fa179e req-eb00a829-ebbe-4494-a958-a894860e87b0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Received event network-vif-unplugged-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.599 257704 DEBUG oslo_concurrency.lockutils [req-d982a4b6-38e3-446c-8346-2a5ab1fa179e req-eb00a829-ebbe-4494-a958-a894860e87b0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.599 257704 DEBUG oslo_concurrency.lockutils [req-d982a4b6-38e3-446c-8346-2a5ab1fa179e req-eb00a829-ebbe-4494-a958-a894860e87b0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.600 257704 DEBUG oslo_concurrency.lockutils [req-d982a4b6-38e3-446c-8346-2a5ab1fa179e req-eb00a829-ebbe-4494-a958-a894860e87b0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.600 257704 DEBUG nova.compute.manager [req-d982a4b6-38e3-446c-8346-2a5ab1fa179e req-eb00a829-ebbe-4494-a958-a894860e87b0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] No waiting events found dispatching network-vif-unplugged-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:02:17 compute-0 nova_compute[257700]: 2025-11-24 10:02:17.600 257704 DEBUG nova.compute.manager [req-d982a4b6-38e3-446c-8346-2a5ab1fa179e req-eb00a829-ebbe-4494-a958-a894860e87b0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Received event network-vif-unplugged-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 10:02:17 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:17.740 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:02:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 10:02:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:18.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:18 compute-0 nova_compute[257700]: 2025-11-24 10:02:18.860 257704 DEBUG nova.network.neutron [-] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:02:18 compute-0 nova_compute[257700]: 2025-11-24 10:02:18.873 257704 INFO nova.compute.manager [-] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Took 1.32 seconds to deallocate network for instance.
Nov 24 10:02:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:18.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:02:18 compute-0 nova_compute[257700]: 2025-11-24 10:02:18.912 257704 DEBUG oslo_concurrency.lockutils [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:18 compute-0 nova_compute[257700]: 2025-11-24 10:02:18.913 257704 DEBUG oslo_concurrency.lockutils [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:18 compute-0 nova_compute[257700]: 2025-11-24 10:02:18.938 257704 DEBUG nova.compute.manager [req-e9979bd2-d1f7-4a38-b822-c0e934372d4f req-344f24c7-c1ae-4035-8e92-5b0d109089bc 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Received event network-vif-deleted-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:02:18 compute-0 nova_compute[257700]: 2025-11-24 10:02:18.940 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:18 compute-0 nova_compute[257700]: 2025-11-24 10:02:18.958 257704 DEBUG oslo_concurrency.processutils [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:19 compute-0 ceph-mon[74331]: pgmap v1066: 353 pgs: 353 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 386 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 10:02:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:19.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:02:19 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4189311379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.393 257704 DEBUG oslo_concurrency.processutils [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
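To size the RBD backend, nova shells out to the exact ceph command logged above and parses the JSON it returns. A standalone sketch that reproduces the call; the command line is copied verbatim from the log, and the per-pool fields printed below follow the common `ceph df --format=json` layout, which is an assumption here:

    # Re-run the logged command and summarize per-pool usage.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    for pool in json.loads(out).get("pools", []):
        print(pool["name"], pool["stats"].get("bytes_used"))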
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.399 257704 DEBUG nova.compute.provider_tree [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.422 257704 DEBUG nova.scheduler.client.report [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
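The inventory dict above is what this node reports to Placement; the capacity Placement will actually schedule against is (total - reserved) * allocation_ratio per resource class, which for the logged values works out to 32 VCPU, 7168 MB of RAM, and 52.2 GB of disk. A tiny sketch of that arithmetic, using only numbers from the log line:

    # Effective Placement capacity from the logged inventory.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "=>", capacity)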
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.445 257704 DEBUG oslo_concurrency.lockutils [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.479 257704 INFO nova.scheduler.client.report [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 5c9d7984-c8b4-481b-8d02-4149b3de004a
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.536 257704 DEBUG oslo_concurrency.lockutils [None req-66d6dcb9-4bb0-4e9b-8ebc-f5860d455dda 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.665 257704 DEBUG nova.compute.manager [req-ebc5c432-bd60-4b14-8ecf-23c407aad7e9 req-440b18c7-c31b-4110-80d0-b30757a30ff0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Received event network-vif-plugged-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.665 257704 DEBUG oslo_concurrency.lockutils [req-ebc5c432-bd60-4b14-8ecf-23c407aad7e9 req-440b18c7-c31b-4110-80d0-b30757a30ff0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.666 257704 DEBUG oslo_concurrency.lockutils [req-ebc5c432-bd60-4b14-8ecf-23c407aad7e9 req-440b18c7-c31b-4110-80d0-b30757a30ff0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.666 257704 DEBUG oslo_concurrency.lockutils [req-ebc5c432-bd60-4b14-8ecf-23c407aad7e9 req-440b18c7-c31b-4110-80d0-b30757a30ff0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "5c9d7984-c8b4-481b-8d02-4149b3de004a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.666 257704 DEBUG nova.compute.manager [req-ebc5c432-bd60-4b14-8ecf-23c407aad7e9 req-440b18c7-c31b-4110-80d0-b30757a30ff0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] No waiting events found dispatching network-vif-plugged-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.666 257704 WARNING nova.compute.manager [req-ebc5c432-bd60-4b14-8ecf-23c407aad7e9 req-440b18c7-c31b-4110-80d0-b30757a30ff0 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Received unexpected event network-vif-plugged-9a53d19e-b3d6-44e0-9943-79a68e4f4fc3 for instance with vm_state deleted and task_state None.
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.723 257704 DEBUG nova.network.neutron [req-860abff8-e272-4256-a2f6-10a3f4704b12 req-da9adae5-b89a-4673-99d7-9cdc7be3c338 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Updated VIF entry in instance network info cache for port 9a53d19e-b3d6-44e0-9943-79a68e4f4fc3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.724 257704 DEBUG nova.network.neutron [req-860abff8-e272-4256-a2f6-10a3f4704b12 req-da9adae5-b89a-4673-99d7-9cdc7be3c338 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Updating instance_info_cache with network_info: [{"id": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "address": "fa:16:3e:aa:32:5c", "network": {"id": "d754e322-9ff1-4d43-9c36-046a636812dd", "bridge": "br-int", "label": "tempest-network-smoke--1566260689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a53d19e-b3", "ovs_interfaceid": "9a53d19e-b3d6-44e0-9943-79a68e4f4fc3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.741 257704 DEBUG oslo_concurrency.lockutils [req-860abff8-e272-4256-a2f6-10a3f4704b12 req-da9adae5-b89a-4673-99d7-9cdc7be3c338 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-5c9d7984-c8b4-481b-8d02-4149b3de004a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
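The instance_info_cache entry two lines up is a JSON list with one element per VIF. A sketch that pulls out the fields usually needed when tracing a port (port id, MAC, fixed IPs); it assumes the blob from the log has been saved to a file, which is hypothetical:

    # Summarize port id / MAC / fixed IPs from a saved network_info blob.
    import json

    with open("network_info.json") as f:  # hypothetical dump of the logged JSON
        network_info = json.load(f)
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips)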
Nov 24 10:02:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 18 KiB/s wr, 2 op/s
Nov 24 10:02:19 compute-0 nova_compute[257700]: 2025-11-24 10:02:19.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:02:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:02:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:02:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:02:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4189311379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:20.573 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:20.573 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:20.573 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:20.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:20 compute-0 podman[278837]: 2025-11-24 10:02:20.773008179 +0000 UTC m=+0.046674413 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
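The podman event above is the periodic healthcheck for the ovn_metadata_agent container coming back healthy (health_failing_streak=0); the check itself is the '/openstack/healthcheck' script mounted into the container per the config_data shown. The same check can be run on demand, sketched here via subprocess; the container name is taken from the log, and `podman healthcheck run` exits 0 when the check passes:

    # Trigger the container healthcheck by hand and report the result.
    import subprocess

    res = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"])
    print("healthy" if res.returncode == 0 else "unhealthy", f"(rc={res.returncode})")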
Nov 24 10:02:20 compute-0 nova_compute[257700]: 2025-11-24 10:02:20.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:02:20] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 24 10:02:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:02:20] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Nov 24 10:02:21 compute-0 ceph-mon[74331]: pgmap v1067: 353 pgs: 353 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 18 KiB/s wr, 2 op/s
Nov 24 10:02:21 compute-0 nova_compute[257700]: 2025-11-24 10:02:21.168 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:21.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 19 KiB/s wr, 30 op/s
Nov 24 10:02:21 compute-0 nova_compute[257700]: 2025-11-24 10:02:21.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:21 compute-0 nova_compute[257700]: 2025-11-24 10:02:21.920 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:02:22 compute-0 nova_compute[257700]: 2025-11-24 10:02:22.010 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:22 compute-0 nova_compute[257700]: 2025-11-24 10:02:22.149 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:22 compute-0 nova_compute[257700]: 2025-11-24 10:02:22.271 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:22.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:22 compute-0 nova_compute[257700]: 2025-11-24 10:02:22.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:22 compute-0 nova_compute[257700]: 2025-11-24 10:02:22.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:02:22 compute-0 nova_compute[257700]: 2025-11-24 10:02:22.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:02:22 compute-0 nova_compute[257700]: 2025-11-24 10:02:22.937 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:02:23 compute-0 ceph-mon[74331]: pgmap v1068: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 19 KiB/s wr, 30 op/s
Nov 24 10:02:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:23.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Nov 24 10:02:23 compute-0 nova_compute[257700]: 2025-11-24 10:02:23.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:23 compute-0 nova_compute[257700]: 2025-11-24 10:02:23.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:23 compute-0 nova_compute[257700]: 2025-11-24 10:02:23.939 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:23 compute-0 nova_compute[257700]: 2025-11-24 10:02:23.939 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:23 compute-0 nova_compute[257700]: 2025-11-24 10:02:23.940 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:23 compute-0 nova_compute[257700]: 2025-11-24 10:02:23.940 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:02:23 compute-0 nova_compute[257700]: 2025-11-24 10:02:23.940 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:24 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2520763458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:02:24 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2457659003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:24 compute-0 nova_compute[257700]: 2025-11-24 10:02:24.372 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:02:24 compute-0 nova_compute[257700]: 2025-11-24 10:02:24.510 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:02:24 compute-0 nova_compute[257700]: 2025-11-24 10:02:24.511 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4586MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:02:24 compute-0 nova_compute[257700]: 2025-11-24 10:02:24.511 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:24 compute-0 nova_compute[257700]: 2025-11-24 10:02:24.511 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:24 compute-0 nova_compute[257700]: 2025-11-24 10:02:24.618 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:02:24 compute-0 nova_compute[257700]: 2025-11-24 10:02:24.619 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:02:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:24.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:24 compute-0 nova_compute[257700]: 2025-11-24 10:02:24.690 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:02:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:02:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:02:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:02:25 compute-0 ceph-mon[74331]: pgmap v1069: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Nov 24 10:02:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/281032063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2457659003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:02:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/614630583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:25 compute-0 nova_compute[257700]: 2025-11-24 10:02:25.142 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:02:25 compute-0 nova_compute[257700]: 2025-11-24 10:02:25.148 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:02:25 compute-0 nova_compute[257700]: 2025-11-24 10:02:25.163 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:02:25 compute-0 nova_compute[257700]: 2025-11-24 10:02:25.181 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:02:25 compute-0 nova_compute[257700]: 2025-11-24 10:02:25.182 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:25.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Nov 24 10:02:25 compute-0 nova_compute[257700]: 2025-11-24 10:02:25.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/614630583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3755230538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3722196526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:26 compute-0 nova_compute[257700]: 2025-11-24 10:02:26.169 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:26.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:26 compute-0 nova_compute[257700]: 2025-11-24 10:02:26.962 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:27 compute-0 nova_compute[257700]: 2025-11-24 10:02:27.012 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:27 compute-0 ceph-mon[74331]: pgmap v1070: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Nov 24 10:02:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:27.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:27.559Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:02:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Nov 24 10:02:27 compute-0 nova_compute[257700]: 2025-11-24 10:02:27.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:28 compute-0 sudo[278912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:02:28 compute-0 sudo[278912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:28 compute-0 sudo[278912]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:28.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:28.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:02:28 compute-0 sshd-session[278812]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:02:28 compute-0 sshd-session[278812]: banner exchange: Connection from 121.31.210.125 port 38926: Connection timed out
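The two sshd lines above record a banner-exchange timeout from 121.31.210.125, an address that appears nowhere else in this trace; it reads like an external scanner rather than anything cluster-related. A quick, hypothetical tally of such attempts per source address from a saved copy of the log (file name and message format assumed to match the lines above):

    # Count timed-out banner exchanges per client address.
    import re
    from collections import Counter

    pattern = re.compile(
        r"Connection from (\d+\.\d+\.\d+\.\d+) port \d+: Connection timed out")
    with open("messages.log") as f:  # hypothetical log dump
        hits = Counter(pattern.findall(f.read()))
    for addr, count in hits.most_common():
        print(addr, count)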
Nov 24 10:02:29 compute-0 ceph-mon[74331]: pgmap v1071: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Nov 24 10:02:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:29.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:02:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:02:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:02:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:02:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:02:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:30.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:02:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:02:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:02:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:02:31 compute-0 nova_compute[257700]: 2025-11-24 10:02:31.172 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:31 compute-0 ceph-mon[74331]: pgmap v1072: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:02:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:02:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:31.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 24 10:02:31 compute-0 nova_compute[257700]: 2025-11-24 10:02:31.940 257704 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978536.9383812, 5c9d7984-c8b4-481b-8d02-4149b3de004a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:02:31 compute-0 nova_compute[257700]: 2025-11-24 10:02:31.940 257704 INFO nova.compute.manager [-] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] VM Stopped (Lifecycle Event)
Nov 24 10:02:31 compute-0 nova_compute[257700]: 2025-11-24 10:02:31.959 257704 DEBUG nova.compute.manager [None req-b52576d2-f8d7-4367-9448-a891362f88ec - - - - - -] [instance: 5c9d7984-c8b4-481b-8d02-4149b3de004a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
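The three lines above show the libvirt driver surfacing a domain lifecycle event ("Stopped") for the instance deleted earlier, after which the manager re-checks its power state. Outside of nova, the same event stream can be watched with the libvirt Python bindings; a minimal sketch, assuming libvirt-python and a local qemu:///system socket, with the callback signature of the lifecycle event API:

    # Watch domain lifecycle events, as nova's driver does internally.
    import libvirt

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.openReadOnly("qemu:///system")

    def on_lifecycle(conn, dom, event, detail, opaque):
        print(dom.UUIDString(), "lifecycle event", event, "detail", detail)

    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)
    while True:
        libvirt.virEventRunDefaultImpl()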
Nov 24 10:02:32 compute-0 nova_compute[257700]: 2025-11-24 10:02:32.054 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:32.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:32 compute-0 nova_compute[257700]: 2025-11-24 10:02:32.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:02:32 compute-0 nova_compute[257700]: 2025-11-24 10:02:32.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 10:02:33 compute-0 ceph-mon[74331]: pgmap v1073: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 24 10:02:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:33.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:02:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:34.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:02:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:02:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:02:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:02:35 compute-0 ceph-mon[74331]: pgmap v1074: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:02:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:35.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:02:36 compute-0 nova_compute[257700]: 2025-11-24 10:02:36.175 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:36.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:36 compute-0 nova_compute[257700]: 2025-11-24 10:02:36.849 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "374e7431-b73b-4a49-8aba-9ac699a35ebf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:36 compute-0 nova_compute[257700]: 2025-11-24 10:02:36.850 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:36 compute-0 nova_compute[257700]: 2025-11-24 10:02:36.863 257704 DEBUG nova.compute.manager [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 24 10:02:36 compute-0 nova_compute[257700]: 2025-11-24 10:02:36.920 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:36 compute-0 nova_compute[257700]: 2025-11-24 10:02:36.921 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:36 compute-0 nova_compute[257700]: 2025-11-24 10:02:36.927 257704 DEBUG nova.virt.hardware [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 24 10:02:36 compute-0 nova_compute[257700]: 2025-11-24 10:02:36.928 257704 INFO nova.compute.claims [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Claim successful on node compute-0.ctlplane.example.com
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.056 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.079 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:37 compute-0 ceph-mon[74331]: pgmap v1075: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:02:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:37.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:02:37 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3448334267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.529 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.534 257704 DEBUG nova.compute.provider_tree [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.547 257704 DEBUG nova.scheduler.client.report [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:02:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:37.560Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:02:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:37.561Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.571 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.572 257704 DEBUG nova.compute.manager [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.648 257704 DEBUG nova.compute.manager [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.648 257704 DEBUG nova.network.neutron [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.668 257704 INFO nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.685 257704 DEBUG nova.compute.manager [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 24 10:02:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.785 257704 DEBUG nova.compute.manager [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.786 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.786 257704 INFO nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Creating image(s)
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.811 257704 DEBUG nova.storage.rbd_utils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 374e7431-b73b-4a49-8aba-9ac699a35ebf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.836 257704 DEBUG nova.storage.rbd_utils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 374e7431-b73b-4a49-8aba-9ac699a35ebf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.863 257704 DEBUG nova.storage.rbd_utils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 374e7431-b73b-4a49-8aba-9ac699a35ebf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.867 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.938 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
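[annotation] The two processutils entries above show how nova probes the cached base image: qemu-img info run under oslo_concurrency.prlimit, which caps the child's address space at 1 GiB and its CPU time at 30 s so a malformed image cannot wedge the compute service. The same invocation reproduced standalone, with the command and path copied from the log:

```python
# Re-run nova's image probe: qemu-img info under prlimit resource caps.
import json
import subprocess

BASE = "/var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40"

out = subprocess.check_output([
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824",        # cap address space at 1 GiB
    "--cpu=30",               # cap CPU time at 30 s
    "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", BASE, "--force-share", "--output=json",
])
info = json.loads(out)
print(info["format"], info["virtual-size"])
```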
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.939 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "2ed5c667523487159c4c4503c82babbc95dbae40" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.940 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.940 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "2ed5c667523487159c4c4503c82babbc95dbae40" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.964 257704 DEBUG nova.storage.rbd_utils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 374e7431-b73b-4a49-8aba-9ac699a35ebf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:02:37 compute-0 nova_compute[257700]: 2025-11-24 10:02:37.968 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 374e7431-b73b-4a49-8aba-9ac699a35ebf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3448334267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:38 compute-0 nova_compute[257700]: 2025-11-24 10:02:38.261 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40 374e7431-b73b-4a49-8aba-9ac699a35ebf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
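[annotation] With no existing `…_disk` image in the vms pool (the three "does not exist" probes above), nova imports the cached base image into RBD. The logged command as a reusable helper; every flag is copied from the log entry:

```python
# Import a cached Glance base image into the Ceph "vms" pool as an
# instance disk, as nova's imagebackend does above.
import subprocess

def rbd_import(base_path, image_name, pool="vms",
               client_id="openstack", conf="/etc/ceph/ceph.conf"):
    subprocess.check_call([
        "rbd", "import", "--pool", pool, base_path, image_name,
        "--image-format=2",   # format 2 supports cloning and snapshots
        "--id", client_id, "--conf", conf,
    ])

rbd_import(
    "/var/lib/nova/instances/_base/2ed5c667523487159c4c4503c82babbc95dbae40",
    "374e7431-b73b-4a49-8aba-9ac699a35ebf_disk",
)
```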
Nov 24 10:02:38 compute-0 nova_compute[257700]: 2025-11-24 10:02:38.320 257704 DEBUG nova.storage.rbd_utils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] resizing rbd image 374e7431-b73b-4a49-8aba-9ac699a35ebf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
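[annotation] The import leaves the image at the base file's size, so rbd_utils then grows it to the flavor's 1 GiB root disk. A sketch of that resize using the librbd Python bindings (python3-rados / python3-rbd); pool, image name, and byte count come from the log above:

```python
# Grow the freshly imported instance disk to the flavor's root size,
# roughly what nova.storage.rbd_utils.resize does.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "374e7431-b73b-4a49-8aba-9ac699a35ebf_disk") as image:
            image.resize(1073741824)  # bytes, matching the log
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```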
Nov 24 10:02:38 compute-0 nova_compute[257700]: 2025-11-24 10:02:38.419 257704 DEBUG nova.objects.instance [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'migration_context' on Instance uuid 374e7431-b73b-4a49-8aba-9ac699a35ebf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:02:38 compute-0 nova_compute[257700]: 2025-11-24 10:02:38.431 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 24 10:02:38 compute-0 nova_compute[257700]: 2025-11-24 10:02:38.432 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Ensure instance console log exists: /var/lib/nova/instances/374e7431-b73b-4a49-8aba-9ac699a35ebf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 24 10:02:38 compute-0 nova_compute[257700]: 2025-11-24 10:02:38.432 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:38 compute-0 nova_compute[257700]: 2025-11-24 10:02:38.433 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:38 compute-0 nova_compute[257700]: 2025-11-24 10:02:38.433 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:38 compute-0 nova_compute[257700]: 2025-11-24 10:02:38.482 257704 DEBUG nova.policy [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '43f79ff3105e4372a3c095e8057d4f1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94d069fc040647d5a6e54894eec915fe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
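[annotation] The policy entry above is not an error: nova asks oslo.policy whether this tenant may attach directly to an external network, and the check fails for a plain reader/member token, so the port is created as a normal tenant attachment. A standalone sketch of the same evaluation; the check string "is_admin:True" is an assumption for illustration (nova's real default lives under nova/policies/), while the credential values mirror the log:

```python
# Reproduce the failed "network:attach_external_network" authorization.
# Rule string is an illustrative assumption; creds mirror the log line.
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_default(
    policy.RuleDefault("network:attach_external_network", "is_admin:True"))

creds = {
    "is_admin": False,
    "roles": ["reader", "member"],
    "user_id": "43f79ff3105e4372a3c095e8057d4f1f",
    "project_id": "94d069fc040647d5a6e54894eec915fe",
}
print(enforcer.enforce("network:attach_external_network", {}, creds))  # False
```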
Nov 24 10:02:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:38.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
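[annotation] The anonymous "HEAD / HTTP/1.0" entries recurring throughout this window (from 192.168.122.100 and .102, roughly once a second) are load-balancer health probes against the radosgw beast frontend, not client traffic. An equivalent probe from Python; port 8080 is an assumption, since the beast access log records only the peer address:

```python
# Health-probe radosgw the way the load balancer does: HEAD / and expect 200.
# Port 8080 is an assumption -- the access log above omits the local port.
import http.client

conn = http.client.HTTPConnection("compute-0.ctlplane.example.com", 8080, timeout=2)
conn.request("HEAD", "/")
print(conn.getresponse().status)  # 200 while the gateway is healthy
conn.close()
```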
Nov 24 10:02:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:38.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:02:39 compute-0 ceph-mon[74331]: pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:02:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:39.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:02:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:02:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:02:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:02:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:02:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:40.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:02:40] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:02:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:02:40] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
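[annotation] The pair of entries above is a single Prometheus scrape of the ceph-mgr prometheus module, logged once by the mgr container and once by the mgr's cherrypy access log (48459 bytes of exposition text). Fetching the same endpoint by hand; port 9283 is the module's usual default and an assumption here, since the log omits it:

```python
# Pull the ceph-mgr prometheus exposition the same way Prometheus does.
# Port 9283 is the module default, assumed here (not shown in the log).
import urllib.request

with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
    body = resp.read().decode()

samples = [l for l in body.splitlines() if l and not l.startswith("#")]
print(len(body), "bytes,", len(samples), "samples")
```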
Nov 24 10:02:41 compute-0 nova_compute[257700]: 2025-11-24 10:02:41.177 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:41 compute-0 nova_compute[257700]: 2025-11-24 10:02:41.234 257704 DEBUG nova.network.neutron [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Successfully created port: 6f615f70-f3a3-45d6-8078-6f32abae3c0b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 24 10:02:41 compute-0 ceph-mon[74331]: pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:02:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.098 257704 DEBUG nova.network.neutron [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Successfully updated port: 6f615f70-f3a3-45d6-8078-6f32abae3c0b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.100 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.108 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.109 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquired lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.109 257704 DEBUG nova.network.neutron [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.163 257704 DEBUG nova.compute.manager [req-984be270-968b-4651-9787-f6ad42fe60ad req-7e69d48f-4f84-435c-b38e-587976afc99e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-changed-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.163 257704 DEBUG nova.compute.manager [req-984be270-968b-4651-9787-f6ad42fe60ad req-7e69d48f-4f84-435c-b38e-587976afc99e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Refreshing instance network info cache due to event network-changed-6f615f70-f3a3-45d6-8078-6f32abae3c0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.163 257704 DEBUG oslo_concurrency.lockutils [req-984be270-968b-4651-9787-f6ad42fe60ad req-7e69d48f-4f84-435c-b38e-587976afc99e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.223 257704 DEBUG nova.network.neutron [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 10:02:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:42.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.835 257704 DEBUG nova.network.neutron [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updating instance_info_cache with network_info: [{"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.854 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Releasing lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.855 257704 DEBUG nova.compute.manager [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Instance network_info: |[{"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.855 257704 DEBUG oslo_concurrency.lockutils [req-984be270-968b-4651-9787-f6ad42fe60ad req-7e69d48f-4f84-435c-b38e-587976afc99e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.855 257704 DEBUG nova.network.neutron [req-984be270-968b-4651-9787-f6ad42fe60ad req-7e69d48f-4f84-435c-b38e-587976afc99e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Refreshing network info cache for port 6f615f70-f3a3-45d6-8078-6f32abae3c0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.858 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Start _get_guest_xml network_info=[{"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_format': None, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '6ef14bdf-4f04-4400-8040-4409d9d5271e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.865 257704 WARNING nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.872 257704 DEBUG nova.virt.libvirt.host [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.873 257704 DEBUG nova.virt.libvirt.host [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.883 257704 DEBUG nova.virt.libvirt.host [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.884 257704 DEBUG nova.virt.libvirt.host [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
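[annotation] The two probes above are nova working out where CPU accounting lives on this host: the cgroup v1 check fails and the v2 check succeeds, as expected on EL9's unified hierarchy. The detection boils down to a couple of filesystem tests, roughly:

```python
# Rough equivalent of nova's cgroup CPU-controller detection above:
# v1 exposes a dedicated "cpu" mount, v2 lists controllers in one file.
from pathlib import Path

has_v1_cpu = Path("/sys/fs/cgroup/cpu").is_dir()

controllers = Path("/sys/fs/cgroup/cgroup.controllers")
has_v2_cpu = controllers.is_file() and "cpu" in controllers.read_text().split()

print("cgroup v1 cpu:", has_v1_cpu)   # False in the log above
print("cgroup v2 cpu:", has_v2_cpu)   # True in the log above
```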
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.884 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.884 257704 DEBUG nova.virt.hardware [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T09:52:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='4a5d03ad-925b-45f1-89bd-f1325f9f3292',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-24T09:52:37Z,direct_url=<?>,disk_format='qcow2',id=6ef14bdf-4f04-4400-8040-4409d9d5271e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='cf636babb68a4ebe9bf137d3fe0e4c0c',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-24T09:52:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.885 257704 DEBUG nova.virt.hardware [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.885 257704 DEBUG nova.virt.hardware [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.885 257704 DEBUG nova.virt.hardware [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.885 257704 DEBUG nova.virt.hardware [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.886 257704 DEBUG nova.virt.hardware [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.886 257704 DEBUG nova.virt.hardware [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.886 257704 DEBUG nova.virt.hardware [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.886 257704 DEBUG nova.virt.hardware [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.886 257704 DEBUG nova.virt.hardware [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.887 257704 DEBUG nova.virt.hardware [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
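[annotation] The hardware.py lines above are nova's CPU-topology search for the m1.nano flavor: with no flavor or image preferences (all 0:0:0), it enumerates every (sockets, cores, threads) triple whose product equals the vCPU count, bounded by the 65536 limits, then sorts by preference. For 1 vCPU the only candidate is 1:1:1, which is exactly what lands in the guest XML below. A simplified sketch of the enumeration step:

```python
# Simplified version of nova.virt.hardware's topology enumeration: every
# (sockets, cores, threads) triple whose product equals the vCPU count.
import itertools

def possible_topologies(vcpus, limit=65536):
    upper = min(vcpus, limit)
    for s, c, t in itertools.product(range(1, upper + 1), repeat=3):
        if s * c * t == vcpus:
            yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
```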
Nov 24 10:02:42 compute-0 nova_compute[257700]: 2025-11-24 10:02:42.889 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:43 compute-0 ceph-mon[74331]: pgmap v1078: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:02:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:43.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 10:02:43 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2006428667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.405 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
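[annotation] The mon dump round-trip above is how nova discovers monitor addresses to embed in the disk definition (the three <host> entries in the guest XML below). The same lookup, parsed:

```python
# Discover Ceph monitor addresses the way nova does before building the
# RBD <source> element: shell out to "ceph mon dump" and parse the JSON.
import json
import subprocess

out = subprocess.check_output([
    "ceph", "mon", "dump", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
])
mons = json.loads(out)["mons"]
print([m["name"] for m in mons], [m["addr"] for m in mons])
```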
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.439 257704 DEBUG nova.storage.rbd_utils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 374e7431-b73b-4a49-8aba-9ac699a35ebf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.448 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:02:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 24 10:02:43 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1474341597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.913 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.914 257704 DEBUG nova.virt.libvirt.vif [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:02:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1423137428',display_name='tempest-TestNetworkBasicOps-server-1423137428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1423137428',id=11,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPoYJrNlBcldqeAeHx35OCB0CcI0kZ2sbqn3p9f2hVqq2CzZeVKoOWsTSbQ3/Y8hxBs5OloguADBMRRRYFv0gtRH9qAkoMCy9kFYI8rxuxHCJ5atJHHGqVmT9XSSSKf04A==',key_name='tempest-TestNetworkBasicOps-2017730832',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-emakj80b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:02:37Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=374e7431-b73b-4a49-8aba-9ac699a35ebf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.915 257704 DEBUG nova.network.os_vif_util [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.915 257704 DEBUG nova.network.os_vif_util [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:6a:6d,bridge_name='br-int',has_traffic_filtering=True,id=6f615f70-f3a3-45d6-8078-6f32abae3c0b,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f615f70-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.916 257704 DEBUG nova.objects.instance [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'pci_devices' on Instance uuid 374e7431-b73b-4a49-8aba-9ac699a35ebf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.932 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] End _get_guest_xml xml=<domain type="kvm">
Nov 24 10:02:43 compute-0 nova_compute[257700]:   <uuid>374e7431-b73b-4a49-8aba-9ac699a35ebf</uuid>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   <name>instance-0000000b</name>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   <memory>131072</memory>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   <vcpu>1</vcpu>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   <metadata>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <nova:name>tempest-TestNetworkBasicOps-server-1423137428</nova:name>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <nova:creationTime>2025-11-24 10:02:42</nova:creationTime>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <nova:flavor name="m1.nano">
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <nova:memory>128</nova:memory>
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <nova:disk>1</nova:disk>
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <nova:swap>0</nova:swap>
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <nova:ephemeral>0</nova:ephemeral>
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <nova:vcpus>1</nova:vcpus>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       </nova:flavor>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <nova:owner>
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <nova:user uuid="43f79ff3105e4372a3c095e8057d4f1f">tempest-TestNetworkBasicOps-1844071378-project-member</nova:user>
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <nova:project uuid="94d069fc040647d5a6e54894eec915fe">tempest-TestNetworkBasicOps-1844071378</nova:project>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       </nova:owner>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <nova:root type="image" uuid="6ef14bdf-4f04-4400-8040-4409d9d5271e"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <nova:ports>
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <nova:port uuid="6f615f70-f3a3-45d6-8078-6f32abae3c0b">
Nov 24 10:02:43 compute-0 nova_compute[257700]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:         </nova:port>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       </nova:ports>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     </nova:instance>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   </metadata>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   <sysinfo type="smbios">
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <system>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <entry name="manufacturer">RDO</entry>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <entry name="product">OpenStack Compute</entry>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <entry name="serial">374e7431-b73b-4a49-8aba-9ac699a35ebf</entry>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <entry name="uuid">374e7431-b73b-4a49-8aba-9ac699a35ebf</entry>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <entry name="family">Virtual Machine</entry>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     </system>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   </sysinfo>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   <os>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <boot dev="hd"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <smbios mode="sysinfo"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   </os>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   <features>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <acpi/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <apic/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <vmcoreinfo/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   </features>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   <clock offset="utc">
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <timer name="pit" tickpolicy="delay"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <timer name="hpet" present="no"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   </clock>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   <cpu mode="host-model" match="exact">
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <topology sockets="1" cores="1" threads="1"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   </cpu>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   <devices>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <disk type="network" device="disk">
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/374e7431-b73b-4a49-8aba-9ac699a35ebf_disk">
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       </source>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       </auth>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <target dev="vda" bus="virtio"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     </disk>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <disk type="network" device="cdrom">
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <driver type="raw" cache="none"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <source protocol="rbd" name="vms/374e7431-b73b-4a49-8aba-9ac699a35ebf_disk.config">
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <host name="192.168.122.100" port="6789"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <host name="192.168.122.102" port="6789"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <host name="192.168.122.101" port="6789"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       </source>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <auth username="openstack">
Nov 24 10:02:43 compute-0 nova_compute[257700]:         <secret type="ceph" uuid="84a084c3-61a7-5de7-8207-1f88efa59a64"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       </auth>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <target dev="sda" bus="sata"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     </disk>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <interface type="ethernet">
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <mac address="fa:16:3e:3a:6a:6d"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <driver name="vhost" rx_queue_size="512"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <mtu size="1442"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <target dev="tap6f615f70-f3"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     </interface>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <serial type="pty">
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <log file="/var/lib/nova/instances/374e7431-b73b-4a49-8aba-9ac699a35ebf/console.log" append="off"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     </serial>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <video>
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <model type="virtio"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     </video>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <input type="tablet" bus="usb"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <rng model="virtio">
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <backend model="random">/dev/urandom</backend>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     </rng>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="pci" model="pcie-root-port"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <controller type="usb" index="0"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     <memballoon model="virtio">
Nov 24 10:02:43 compute-0 nova_compute[257700]:       <stats period="10"/>
Nov 24 10:02:43 compute-0 nova_compute[257700]:     </memballoon>
Nov 24 10:02:43 compute-0 nova_compute[257700]:   </devices>
Nov 24 10:02:43 compute-0 nova_compute[257700]: </domain>
Nov 24 10:02:43 compute-0 nova_compute[257700]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
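[annotation] The dump above ends the _get_guest_xml step; nova then hands that definition to libvirtd and boots it. A minimal equivalent with the libvirt Python bindings, assuming the logged XML has been saved to domain.xml; nova itself goes through its Guest wrapper with creation flags, rollback, and event waits, so this is a sketch rather than the driver's exact call path:

```python
# Define and start the domain from the XML logged above (saved locally).
# Minimal sketch; nova's driver adds flags, rollback, and event handling.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    with open("domain.xml") as f:        # the <domain> document logged above
        dom = conn.defineXML(f.read())   # persistent definition
    dom.create()                         # boots instance-0000000b
    print(dom.name(), dom.isActive())
finally:
    conn.close()
```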
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.933 257704 DEBUG nova.compute.manager [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Preparing to wait for external event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.934 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.934 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.934 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.935 257704 DEBUG nova.virt.libvirt.vif [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-24T10:02:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1423137428',display_name='tempest-TestNetworkBasicOps-server-1423137428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1423137428',id=11,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPoYJrNlBcldqeAeHx35OCB0CcI0kZ2sbqn3p9f2hVqq2CzZeVKoOWsTSbQ3/Y8hxBs5OloguADBMRRRYFv0gtRH9qAkoMCy9kFYI8rxuxHCJ5atJHHGqVmT9XSSSKf04A==',key_name='tempest-TestNetworkBasicOps-2017730832',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-emakj80b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-24T10:02:37Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=374e7431-b73b-4a49-8aba-9ac699a35ebf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.935 257704 DEBUG nova.network.os_vif_util [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.936 257704 DEBUG nova.network.os_vif_util [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:6a:6d,bridge_name='br-int',has_traffic_filtering=True,id=6f615f70-f3a3-45d6-8078-6f32abae3c0b,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f615f70-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.936 257704 DEBUG os_vif [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:6a:6d,bridge_name='br-int',has_traffic_filtering=True,id=6f615f70-f3a3-45d6-8078-6f32abae3c0b,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f615f70-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.936 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.937 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.937 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.939 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.939 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f615f70-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.940 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f615f70-f3, col_values=(('external_ids', {'iface-id': '6f615f70-f3a3-45d6-8078-6f32abae3c0b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:6a:6d', 'vm-uuid': '374e7431-b73b-4a49-8aba-9ac699a35ebf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.941 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:43 compute-0 NetworkManager[48883]: <info>  [1763978563.9425] manager: (tap6f615f70-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.943 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.947 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.947 257704 INFO os_vif [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:6a:6d,bridge_name='br-int',has_traffic_filtering=True,id=6f615f70-f3a3-45d6-8078-6f32abae3c0b,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f615f70-f3')
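[annotation] The transaction trail above is the whole OVS plug: an idempotent AddBridgeCommand for br-int (which logs "Transaction caused no change" because the bridge already exists), an AddPortCommand for the tap device, and a DbSetCommand that stamps the Interface row's external_ids (iface-id, attached-mac, vm-uuid) so ovn-controller can match the OVS port to the OVN logical port and claim it. A minimal sketch of the same transaction issued directly through ovsdbapp follows; the ovsdb socket path is an assumed default, and the port name, MAC and iface-id are the ones from this log. This is a hedged reconstruction, not Nova's plug code.

    # Hedged sketch: replay the AddBridge/AddPort/DbSet transaction from
    # the log with ovsdbapp. The OVSDB endpoint is an assumed default.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # may_exist=True mirrors the idempotent commands in the log
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap6f615f70-f3', may_exist=True))
        # external_ids:iface-id is the key ovn-controller uses to bind
        # the OVS interface to the Neutron/OVN logical port
        txn.add(api.db_set(
            'Interface', 'tap6f615f70-f3',
            ('external_ids',
             {'iface-id': '6f615f70-f3a3-45d6-8078-6f32abae3c0b',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:3a:6a:6d'})))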
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.995 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.996 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.996 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] No VIF found with MAC fa:16:3e:3a:6a:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 24 10:02:43 compute-0 nova_compute[257700]: 2025-11-24 10:02:43.996 257704 INFO nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Using config drive
Nov 24 10:02:44 compute-0 nova_compute[257700]: 2025-11-24 10:02:44.020 257704 DEBUG nova.storage.rbd_utils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 374e7431-b73b-4a49-8aba-9ac699a35ebf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:02:44 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2006428667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:02:44 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1474341597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:02:44 compute-0 nova_compute[257700]: 2025-11-24 10:02:44.585 257704 INFO nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Creating config drive at /var/lib/nova/instances/374e7431-b73b-4a49-8aba-9ac699a35ebf/disk.config
Nov 24 10:02:44 compute-0 nova_compute[257700]: 2025-11-24 10:02:44.592 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/374e7431-b73b-4a49-8aba-9ac699a35ebf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvhe1obm2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:44 compute-0 nova_compute[257700]: 2025-11-24 10:02:44.649 257704 DEBUG nova.network.neutron [req-984be270-968b-4651-9787-f6ad42fe60ad req-7e69d48f-4f84-435c-b38e-587976afc99e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updated VIF entry in instance network info cache for port 6f615f70-f3a3-45d6-8078-6f32abae3c0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:02:44 compute-0 nova_compute[257700]: 2025-11-24 10:02:44.650 257704 DEBUG nova.network.neutron [req-984be270-968b-4651-9787-f6ad42fe60ad req-7e69d48f-4f84-435c-b38e-587976afc99e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updating instance_info_cache with network_info: [{"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:02:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:44.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:44 compute-0 nova_compute[257700]: 2025-11-24 10:02:44.668 257704 DEBUG oslo_concurrency.lockutils [req-984be270-968b-4651-9787-f6ad42fe60ad req-7e69d48f-4f84-435c-b38e-587976afc99e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:02:44 compute-0 nova_compute[257700]: 2025-11-24 10:02:44.721 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/374e7431-b73b-4a49-8aba-9ac699a35ebf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvhe1obm2" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:02:44 compute-0 nova_compute[257700]: 2025-11-24 10:02:44.762 257704 DEBUG nova.storage.rbd_utils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] rbd image 374e7431-b73b-4a49-8aba-9ac699a35ebf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 10:02:44 compute-0 nova_compute[257700]: 2025-11-24 10:02:44.766 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/374e7431-b73b-4a49-8aba-9ac699a35ebf/disk.config 374e7431-b73b-4a49-8aba-9ac699a35ebf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:02:44 compute-0 nova_compute[257700]: 2025-11-24 10:02:44.961 257704 DEBUG oslo_concurrency.processutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/374e7431-b73b-4a49-8aba-9ac699a35ebf/disk.config 374e7431-b73b-4a49-8aba-9ac699a35ebf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:02:44 compute-0 nova_compute[257700]: 2025-11-24 10:02:44.962 257704 INFO nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Deleting local config drive /var/lib/nova/instances/374e7431-b73b-4a49-8aba-9ac699a35ebf/disk.config because it was imported into RBD.
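[annotation] Taken together, the entries since 10:02:44.020 are the complete config-drive path for a Ceph-backed instance: verify that <uuid>_disk.config does not already exist in the vms pool, build an ISO9660 image with mkisofs under the config-2 volume label, rbd-import it, then delete the local copy. A rough stand-alone reproduction, under the assumption that /tmp/metadata-dir stands in for the temporary directory Nova populates (the other arguments are taken from the logged commands):

    # Sketch of the config-drive build/import sequence logged above.
    # /tmp/metadata-dir is a placeholder, not a path from this log.
    import os
    import subprocess

    uuid = '374e7431-b73b-4a49-8aba-9ac699a35ebf'
    iso = f'/var/lib/nova/instances/{uuid}/disk.config'

    # 1. ISO image labelled config-2, the volume label cloud-init probes for
    subprocess.run(['mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-quiet', '-J', '-r',
                    '-V', 'config-2', '/tmp/metadata-dir'], check=True)

    # 2. Import into the vms pool so the hypervisor can attach it from Ceph
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    f'{uuid}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)

    # 3. The local file is redundant once imported
    # ("Deleting local config drive ... because it was imported into RBD.")
    os.unlink(iso)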
Nov 24 10:02:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:02:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:02:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:02:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:02:45 compute-0 kernel: tap6f615f70-f3: entered promiscuous mode
Nov 24 10:02:45 compute-0 NetworkManager[48883]: <info>  [1763978565.0370] manager: (tap6f615f70-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Nov 24 10:02:45 compute-0 ovn_controller[155123]: 2025-11-24T10:02:45Z|00087|binding|INFO|Claiming lport 6f615f70-f3a3-45d6-8078-6f32abae3c0b for this chassis.
Nov 24 10:02:45 compute-0 ovn_controller[155123]: 2025-11-24T10:02:45Z|00088|binding|INFO|6f615f70-f3a3-45d6-8078-6f32abae3c0b: Claiming fa:16:3e:3a:6a:6d 10.100.0.8
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.038 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.043 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.050 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.068 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:6a:6d 10.100.0.8'], port_security=['fa:16:3e:3a:6a:6d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '374e7431-b73b-4a49-8aba-9ac699a35ebf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e79fa3a-aa58-45ad-be12-11ed04eeadbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5c42cc1-2181-41fb-bb98-22dec924e208, chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=6f615f70-f3a3-45d6-8078-6f32abae3c0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.071 165073 INFO neutron.agent.ovn.metadata.agent [-] Port 6f615f70-f3a3-45d6-8078-6f32abae3c0b in datapath d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 bound to our chassis
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.073 165073 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9ce2622-5822-4ecf-9fb9-f5f15c8ea094
Nov 24 10:02:45 compute-0 systemd-machined[219130]: New machine qemu-7-instance-0000000b.
Nov 24 10:02:45 compute-0 systemd-udevd[279294]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.096 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd8e43f-80ef-4882-b85e-329c726f39f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.098 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9ce2622-51 in ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.103 264910 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9ce2622-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.104 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[bee7d547-25bb-4a61-9b84-79a103d716c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-0000000b.
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.106 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[ac86761e-c0d9-4faf-9467-d617dd4d620f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 NetworkManager[48883]: <info>  [1763978565.1147] device (tap6f615f70-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 10:02:45 compute-0 NetworkManager[48883]: <info>  [1763978565.1157] device (tap6f615f70-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.128 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[cd885fdd-48b0-4ba3-81e2-8af1a4d30240]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.154 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:45 compute-0 ovn_controller[155123]: 2025-11-24T10:02:45Z|00089|binding|INFO|Setting lport 6f615f70-f3a3-45d6-8078-6f32abae3c0b ovn-installed in OVS
Nov 24 10:02:45 compute-0 ovn_controller[155123]: 2025-11-24T10:02:45Z|00090|binding|INFO|Setting lport 6f615f70-f3a3-45d6-8078-6f32abae3c0b up in Southbound
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.157 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[0a2bda3f-cf40-42e8-815e-86f79b2754c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.160 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:45 compute-0 podman[279275]: 2025-11-24 10:02:45.168845034 +0000 UTC m=+0.102244603 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.192 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[57e45679-593b-4a29-88b3-7f8210d4ddc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 NetworkManager[48883]: <info>  [1763978565.1993] manager: (tapd9ce2622-50): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Nov 24 10:02:45 compute-0 podman[279276]: 2025-11-24 10:02:45.199217953 +0000 UTC m=+0.117589971 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 24 10:02:45 compute-0 systemd-udevd[279302]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.198 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[28fdc5a4-c7ab-4440-878c-c1e993408aa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.240 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[45c36f36-b468-4e4a-b4a7-ddf61a8462c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.244 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[2b063b8e-20da-4305-9a29-36c9bf329833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 NetworkManager[48883]: <info>  [1763978565.2674] device (tapd9ce2622-50): carrier: link connected
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.276 264951 DEBUG oslo.privsep.daemon [-] privsep: reply[8063d946-6d48-4b33-a1d9-d24ef5faa8dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 sshd-session[278944]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:02:45 compute-0 sshd-session[278944]: banner exchange: Connection from 14.215.126.91 port 36122: Connection timed out
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.295 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[f97cc021-17dc-4248-85e2-a177c5e25f94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9ce2622-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:68:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451283, 'reachable_time': 35605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279355, 'error': None, 'target': 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.310 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf3b53f-e8b9-4e65-829a-3626d518bde8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:68d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451283, 'tstamp': 451283}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279371, 'error': None, 'target': 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 ceph-mon[74331]: pgmap v1079: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.337 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[800b9bdc-1d56-4746-a62a-f62c192fe8c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9ce2622-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:68:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451283, 'reachable_time': 35605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279373, 'error': None, 'target': 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
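[annotation] Underneath the privsep chatter, the metadata-namespace plumbing is ordinary netlink work: the agent creates a veth pair, pushes the -51 end into the ovnmeta-<network> namespace (the RTM_NEWLINK/RTM_NEWADDR replies above describe that end, targeted at the namespace), and leaves the -50 end in the root namespace to be plugged into br-int. A minimal pyroute2 sketch of that pattern, with invented interface and namespace names; in the deployment above neutron issues these calls through its privsep daemon rather than directly:

    # Hedged sketch of the veth-into-namespace step; names are invented.
    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-example'
    netns.create(NS)  # one namespace per provisioned datapath

    with IPRoute() as ipr:
        # veth pair: the -50 end stays in the root namespace for br-int,
        # the -51 end carries the metadata address inside the namespace
        ipr.link('add', ifname='tapexample-50', kind='veth',
                 peer='tapexample-51')
        peer = ipr.link_lookup(ifname='tapexample-51')[0]
        ipr.link('set', index=peer, net_ns_fd=NS)  # move into the namespace
        host = ipr.link_lookup(ifname='tapexample-50')[0]
        ipr.link('set', index=host, state='up')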
Nov 24 10:02:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.381 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[98910360-2de0-40fb-b601-3d1f9213dfa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:02:45
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['images', '.nfs', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.meta']
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.458 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7e1c76-a538-4ddd-afa4-fa3607230a2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.459 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9ce2622-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.460 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.460 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9ce2622-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:02:45 compute-0 kernel: tapd9ce2622-50: entered promiscuous mode
Nov 24 10:02:45 compute-0 NetworkManager[48883]: <info>  [1763978565.4630] manager: (tapd9ce2622-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.462 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.467 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9ce2622-50, col_values=(('external_ids', {'iface-id': '7ff70316-0c3c-4814-add9-f5919c7adc2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:02:45 compute-0 ovn_controller[155123]: 2025-11-24T10:02:45Z|00091|binding|INFO|Releasing lport 7ff70316-0c3c-4814-add9-f5919c7adc2b from this chassis (sb_readonly=0)
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.470 165073 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9ce2622-5822-4ecf-9fb9-f5f15c8ea094.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9ce2622-5822-4ecf-9fb9-f5f15c8ea094.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.471 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4bd457-d172-45f0-9f7b-506c7c6970ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.471 165073 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: global
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     log         /dev/log local0 debug
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     log-tag     haproxy-metadata-proxy-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     user        root
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     group       root
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     maxconn     1024
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     pidfile     /var/lib/neutron/external/pids/d9ce2622-5822-4ecf-9fb9-f5f15c8ea094.pid.haproxy
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     daemon
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: defaults
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     log global
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     mode http
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     option httplog
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     option dontlognull
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     option http-server-close
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     option forwardfor
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     retries                 3
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     timeout http-request    30s
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     timeout connect         30s
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     timeout client          32s
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     timeout server          32s
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     timeout http-keep-alive 30s
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: listen listener
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     bind 169.254.169.254:80
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     server metadata /var/lib/neutron/metadata_proxy
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:     http-request add-header X-OVN-Network-ID d9ce2622-5822-4ecf-9fb9-f5f15c8ea094
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 24 10:02:45 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:02:45.472 165073 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'env', 'PROCESS_TAG=haproxy-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9ce2622-5822-4ecf-9fb9-f5f15c8ea094.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
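[annotation] The rendered haproxy_cfg above is the per-network metadata proxy: it binds 169.254.169.254:80 inside the ovnmeta namespace, forwards to the /var/lib/neutron/metadata_proxy UNIX socket (haproxy treats a server address beginning with '/' as a UNIX socket path), and stamps each request with X-OVN-Network-ID so the metadata service can resolve the network. A config like this can be syntax-checked before the rootwrap launch shown above; a small sketch with the config trimmed to the lines that matter (file handling is illustrative):

    # Sketch: validate a trimmed copy of the logged proxy config with
    # `haproxy -c` (check mode); nothing is started or bound.
    import subprocess
    import tempfile
    import textwrap

    cfg = textwrap.dedent("""\
        global
            daemon
        defaults
            mode http
            timeout connect 30s
            timeout client  32s
            timeout server  32s
        listen listener
            bind 169.254.169.254:80
            server metadata /var/lib/neutron/metadata_proxy
            http-request add-header X-OVN-Network-ID d9ce2622-5822-4ecf-9fb9-f5f15c8ea094
        """)

    with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
        f.write(cfg)

    # -c parses and validates the configuration without running the proxy
    subprocess.run(['haproxy', '-c', '-f', f.name], check=True)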
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.485 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.500 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978565.4998338, 374e7431-b73b-4a49-8aba-9ac699a35ebf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.500 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] VM Started (Lifecycle Event)
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.533 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.538 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978565.5029962, 374e7431-b73b-4a49-8aba-9ac699a35ebf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.538 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] VM Paused (Lifecycle Event)
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.556 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.560 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.591 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.674 257704 DEBUG nova.compute.manager [req-25f3d0d6-6545-4c95-93ce-1fff97c7d2f4 req-16707e09-c92b-4e14-8104-55e410f225d7 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.675 257704 DEBUG oslo_concurrency.lockutils [req-25f3d0d6-6545-4c95-93ce-1fff97c7d2f4 req-16707e09-c92b-4e14-8104-55e410f225d7 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.675 257704 DEBUG oslo_concurrency.lockutils [req-25f3d0d6-6545-4c95-93ce-1fff97c7d2f4 req-16707e09-c92b-4e14-8104-55e410f225d7 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.675 257704 DEBUG oslo_concurrency.lockutils [req-25f3d0d6-6545-4c95-93ce-1fff97c7d2f4 req-16707e09-c92b-4e14-8104-55e410f225d7 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.676 257704 DEBUG nova.compute.manager [req-25f3d0d6-6545-4c95-93ce-1fff97c7d2f4 req-16707e09-c92b-4e14-8104-55e410f225d7 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Processing event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.676 257704 DEBUG nova.compute.manager [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.680 257704 DEBUG nova.virt.driver [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] Emitting event <LifecycleEvent: 1763978565.6797612, 374e7431-b73b-4a49-8aba-9ac699a35ebf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.680 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] VM Resumed (Lifecycle Event)
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.681 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.685 257704 INFO nova.virt.libvirt.driver [-] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Instance spawned successfully.
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.685 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.706 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.710 257704 DEBUG nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.727 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.728 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.731 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.732 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.733 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.733 257704 DEBUG nova.virt.libvirt.driver [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.737 257704 INFO nova.compute.manager [None req-5019cbb5-01a9-4c52-b47e-176005817fb9 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.801 257704 INFO nova.compute.manager [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Took 8.02 seconds to spawn the instance on the hypervisor.
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.801 257704 DEBUG nova.compute.manager [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:02:45 compute-0 podman[279431]: 2025-11-24 10:02:45.841149436 +0000 UTC m=+0.048246361 container create 6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:02:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:02:45 compute-0 systemd[1]: Started libpod-conmon-6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc.scope.
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.905 257704 INFO nova.compute.manager [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Took 9.01 seconds to build instance.
Nov 24 10:02:45 compute-0 podman[279431]: 2025-11-24 10:02:45.815495133 +0000 UTC m=+0.022592078 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 10:02:45 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d63a08d9f038fc9668efc04ae3cb149eb7b182e1d89a04f14d50d676f266685/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:45 compute-0 podman[279431]: 2025-11-24 10:02:45.932305424 +0000 UTC m=+0.139402389 container init 6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 24 10:02:45 compute-0 nova_compute[257700]: 2025-11-24 10:02:45.932 257704 DEBUG oslo_concurrency.lockutils [None req-082188d7-9fd9-45b7-a981-eef36e6b563e 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:45 compute-0 podman[279431]: 2025-11-24 10:02:45.937450061 +0000 UTC m=+0.144546996 container start 6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 10:02:45 compute-0 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[279446]: [NOTICE]   (279450) : New worker (279452) forked
Nov 24 10:02:45 compute-0 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[279446]: [NOTICE]   (279450) : Loading success.
Nov 24 10:02:46 compute-0 nova_compute[257700]: 2025-11-24 10:02:46.180 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:02:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:46.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:47 compute-0 ceph-mon[74331]: pgmap v1080: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:02:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:47.561Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:02:47 compute-0 nova_compute[257700]: 2025-11-24 10:02:47.747 257704 DEBUG nova.compute.manager [req-a4c10904-0fc4-47b4-951a-9eddf0b24987 req-1c7590dc-8770-42a7-b3a7-88493c6c6239 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:02:47 compute-0 nova_compute[257700]: 2025-11-24 10:02:47.747 257704 DEBUG oslo_concurrency.lockutils [req-a4c10904-0fc4-47b4-951a-9eddf0b24987 req-1c7590dc-8770-42a7-b3a7-88493c6c6239 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:02:47 compute-0 nova_compute[257700]: 2025-11-24 10:02:47.748 257704 DEBUG oslo_concurrency.lockutils [req-a4c10904-0fc4-47b4-951a-9eddf0b24987 req-1c7590dc-8770-42a7-b3a7-88493c6c6239 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:02:47 compute-0 nova_compute[257700]: 2025-11-24 10:02:47.748 257704 DEBUG oslo_concurrency.lockutils [req-a4c10904-0fc4-47b4-951a-9eddf0b24987 req-1c7590dc-8770-42a7-b3a7-88493c6c6239 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:02:47 compute-0 nova_compute[257700]: 2025-11-24 10:02:47.748 257704 DEBUG nova.compute.manager [req-a4c10904-0fc4-47b4-951a-9eddf0b24987 req-1c7590dc-8770-42a7-b3a7-88493c6c6239 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] No waiting events found dispatching network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:02:47 compute-0 nova_compute[257700]: 2025-11-24 10:02:47.748 257704 WARNING nova.compute.manager [req-a4c10904-0fc4-47b4-951a-9eddf0b24987 req-1c7590dc-8770-42a7-b3a7-88493c6c6239 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received unexpected event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b for instance with vm_state active and task_state None.
Nov 24 10:02:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 24 10:02:48 compute-0 NetworkManager[48883]: <info>  [1763978568.1442] manager: (patch-br-int-to-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 24 10:02:48 compute-0 NetworkManager[48883]: <info>  [1763978568.1451] manager: (patch-provnet-aec09a4d-39ae-42d2-80ba-0cd5b53fed5d-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 24 10:02:48 compute-0 ovn_controller[155123]: 2025-11-24T10:02:48Z|00092|binding|INFO|Releasing lport 7ff70316-0c3c-4814-add9-f5919c7adc2b from this chassis (sb_readonly=0)
Nov 24 10:02:48 compute-0 nova_compute[257700]: 2025-11-24 10:02:48.143 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:48 compute-0 nova_compute[257700]: 2025-11-24 10:02:48.148 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:48 compute-0 ovn_controller[155123]: 2025-11-24T10:02:48Z|00093|binding|INFO|Releasing lport 7ff70316-0c3c-4814-add9-f5919c7adc2b from this chassis (sb_readonly=0)
Nov 24 10:02:48 compute-0 sudo[279464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:02:48 compute-0 sudo[279464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:48 compute-0 sudo[279464]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:48 compute-0 nova_compute[257700]: 2025-11-24 10:02:48.350 257704 DEBUG nova.compute.manager [req-b20e18e7-6390-4531-8aa2-41b33a85d556 req-e3744d34-f81b-4f97-80af-3eb289788927 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-changed-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:02:48 compute-0 nova_compute[257700]: 2025-11-24 10:02:48.350 257704 DEBUG nova.compute.manager [req-b20e18e7-6390-4531-8aa2-41b33a85d556 req-e3744d34-f81b-4f97-80af-3eb289788927 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Refreshing instance network info cache due to event network-changed-6f615f70-f3a3-45d6-8078-6f32abae3c0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:02:48 compute-0 nova_compute[257700]: 2025-11-24 10:02:48.350 257704 DEBUG oslo_concurrency.lockutils [req-b20e18e7-6390-4531-8aa2-41b33a85d556 req-e3744d34-f81b-4f97-80af-3eb289788927 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:02:48 compute-0 nova_compute[257700]: 2025-11-24 10:02:48.350 257704 DEBUG oslo_concurrency.lockutils [req-b20e18e7-6390-4531-8aa2-41b33a85d556 req-e3744d34-f81b-4f97-80af-3eb289788927 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:02:48 compute-0 nova_compute[257700]: 2025-11-24 10:02:48.350 257704 DEBUG nova.network.neutron [req-b20e18e7-6390-4531-8aa2-41b33a85d556 req-e3744d34-f81b-4f97-80af-3eb289788927 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Refreshing network info cache for port 6f615f70-f3a3-45d6-8078-6f32abae3c0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:02:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:48.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:48.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:02:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:48.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:02:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:48.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:02:48 compute-0 nova_compute[257700]: 2025-11-24 10:02:48.941 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:49.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:49 compute-0 ceph-mon[74331]: pgmap v1081: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 24 10:02:49 compute-0 nova_compute[257700]: 2025-11-24 10:02:49.603 257704 DEBUG nova.network.neutron [req-b20e18e7-6390-4531-8aa2-41b33a85d556 req-e3744d34-f81b-4f97-80af-3eb289788927 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updated VIF entry in instance network info cache for port 6f615f70-f3a3-45d6-8078-6f32abae3c0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:02:49 compute-0 nova_compute[257700]: 2025-11-24 10:02:49.604 257704 DEBUG nova.network.neutron [req-b20e18e7-6390-4531-8aa2-41b33a85d556 req-e3744d34-f81b-4f97-80af-3eb289788927 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updating instance_info_cache with network_info: [{"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:02:49 compute-0 nova_compute[257700]: 2025-11-24 10:02:49.621 257704 DEBUG oslo_concurrency.lockutils [req-b20e18e7-6390-4531-8aa2-41b33a85d556 req-e3744d34-f81b-4f97-80af-3eb289788927 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:02:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 24 10:02:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:02:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:02:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:02:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:02:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:50.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:02:50] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Nov 24 10:02:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:02:50] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Nov 24 10:02:51 compute-0 nova_compute[257700]: 2025-11-24 10:02:51.182 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:51.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:51 compute-0 ceph-mon[74331]: pgmap v1082: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 24 10:02:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 10:02:51 compute-0 podman[279493]: 2025-11-24 10:02:51.793321292 +0000 UTC m=+0.065584139 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 24 10:02:52 compute-0 sudo[279512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:02:52 compute-0 sudo[279512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:52 compute-0 sudo[279512]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:52 compute-0 sudo[279537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:02:52 compute-0 sudo[279537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:52.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:52 compute-0 sudo[279537]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 13 KiB/s wr, 80 op/s
Nov 24 10:02:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:02:52 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:02:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:02:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:02:53 compute-0 sudo[279592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:02:53 compute-0 sudo[279592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:53 compute-0 sudo[279592]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:53 compute-0 sudo[279617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:02:53 compute-0 sudo[279617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:53.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:53 compute-0 ceph-mon[74331]: pgmap v1083: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 10:02:53 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:02:53 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:02:53 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:02:53 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:02:53 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:02:53 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:02:53 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:02:53 compute-0 podman[279687]: 2025-11-24 10:02:53.630287809 +0000 UTC m=+0.061568599 container create 6ad8f4a3278e84b9dbce66c50225be054f2dcdf4e9eec23572fce1cb3af21858 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_diffie, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 10:02:53 compute-0 podman[279687]: 2025-11-24 10:02:53.596953027 +0000 UTC m=+0.028233887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:02:53 compute-0 systemd[1]: Started libpod-conmon-6ad8f4a3278e84b9dbce66c50225be054f2dcdf4e9eec23572fce1cb3af21858.scope.
Nov 24 10:02:53 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:02:53 compute-0 podman[279687]: 2025-11-24 10:02:53.771523573 +0000 UTC m=+0.202804443 container init 6ad8f4a3278e84b9dbce66c50225be054f2dcdf4e9eec23572fce1cb3af21858 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 10:02:53 compute-0 podman[279687]: 2025-11-24 10:02:53.781267243 +0000 UTC m=+0.212548043 container start 6ad8f4a3278e84b9dbce66c50225be054f2dcdf4e9eec23572fce1cb3af21858 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_diffie, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:02:53 compute-0 podman[279687]: 2025-11-24 10:02:53.785344653 +0000 UTC m=+0.216625503 container attach 6ad8f4a3278e84b9dbce66c50225be054f2dcdf4e9eec23572fce1cb3af21858 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 10:02:53 compute-0 keen_diffie[279704]: 167 167
Nov 24 10:02:53 compute-0 systemd[1]: libpod-6ad8f4a3278e84b9dbce66c50225be054f2dcdf4e9eec23572fce1cb3af21858.scope: Deactivated successfully.
Nov 24 10:02:53 compute-0 podman[279687]: 2025-11-24 10:02:53.790618313 +0000 UTC m=+0.221899073 container died 6ad8f4a3278e84b9dbce66c50225be054f2dcdf4e9eec23572fce1cb3af21858 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:02:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea01eca3815983749bbe80f0ace4940afd93ae294daf0b5aabdade3b6f9c8d08-merged.mount: Deactivated successfully.
Nov 24 10:02:53 compute-0 podman[279687]: 2025-11-24 10:02:53.825666937 +0000 UTC m=+0.256947697 container remove 6ad8f4a3278e84b9dbce66c50225be054f2dcdf4e9eec23572fce1cb3af21858 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 10:02:53 compute-0 systemd[1]: libpod-conmon-6ad8f4a3278e84b9dbce66c50225be054f2dcdf4e9eec23572fce1cb3af21858.scope: Deactivated successfully.
Nov 24 10:02:53 compute-0 nova_compute[257700]: 2025-11-24 10:02:53.944 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:54 compute-0 podman[279728]: 2025-11-24 10:02:54.048485853 +0000 UTC m=+0.051221044 container create 095eb9e97b4921f51a86c93fee612c55d2d72aa968136f6ca16d2150259d1369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lederberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:02:54 compute-0 systemd[1]: Started libpod-conmon-095eb9e97b4921f51a86c93fee612c55d2d72aa968136f6ca16d2150259d1369.scope.
Nov 24 10:02:54 compute-0 podman[279728]: 2025-11-24 10:02:54.025160178 +0000 UTC m=+0.027895389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:02:54 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab43024f41dfcc1f8ef8e30075a8d969435eaba6c7ff486297d68925493beaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab43024f41dfcc1f8ef8e30075a8d969435eaba6c7ff486297d68925493beaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab43024f41dfcc1f8ef8e30075a8d969435eaba6c7ff486297d68925493beaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab43024f41dfcc1f8ef8e30075a8d969435eaba6c7ff486297d68925493beaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab43024f41dfcc1f8ef8e30075a8d969435eaba6c7ff486297d68925493beaf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:54 compute-0 podman[279728]: 2025-11-24 10:02:54.144284036 +0000 UTC m=+0.147019227 container init 095eb9e97b4921f51a86c93fee612c55d2d72aa968136f6ca16d2150259d1369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lederberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:02:54 compute-0 podman[279728]: 2025-11-24 10:02:54.153134944 +0000 UTC m=+0.155870135 container start 095eb9e97b4921f51a86c93fee612c55d2d72aa968136f6ca16d2150259d1369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lederberg, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 10:02:54 compute-0 podman[279728]: 2025-11-24 10:02:54.158274771 +0000 UTC m=+0.161009982 container attach 095eb9e97b4921f51a86c93fee612c55d2d72aa968136f6ca16d2150259d1369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 10:02:54 compute-0 ceph-mon[74331]: pgmap v1084: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 13 KiB/s wr, 80 op/s
Nov 24 10:02:54 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4175873145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:02:54 compute-0 naughty_lederberg[279745]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:02:54 compute-0 naughty_lederberg[279745]: --> All data devices are unavailable
Nov 24 10:02:54 compute-0 systemd[1]: libpod-095eb9e97b4921f51a86c93fee612c55d2d72aa968136f6ca16d2150259d1369.scope: Deactivated successfully.
Nov 24 10:02:54 compute-0 podman[279761]: 2025-11-24 10:02:54.535682729 +0000 UTC m=+0.022865594 container died 095eb9e97b4921f51a86c93fee612c55d2d72aa968136f6ca16d2150259d1369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lederberg, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:02:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ab43024f41dfcc1f8ef8e30075a8d969435eaba6c7ff486297d68925493beaf-merged.mount: Deactivated successfully.
Nov 24 10:02:54 compute-0 podman[279761]: 2025-11-24 10:02:54.573847061 +0000 UTC m=+0.061029866 container remove 095eb9e97b4921f51a86c93fee612c55d2d72aa968136f6ca16d2150259d1369 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:02:54 compute-0 systemd[1]: libpod-conmon-095eb9e97b4921f51a86c93fee612c55d2d72aa968136f6ca16d2150259d1369.scope: Deactivated successfully.
Nov 24 10:02:54 compute-0 sudo[279617]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:54 compute-0 sudo[279776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:02:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:54.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:54 compute-0 sudo[279776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:54 compute-0 sudo[279776]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:54 compute-0 sudo[279801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:02:54 compute-0 sudo[279801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 13 KiB/s wr, 80 op/s
Nov 24 10:02:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:02:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:02:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:02:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:02:55 compute-0 podman[279866]: 2025-11-24 10:02:55.161210228 +0000 UTC m=+0.047891272 container create 380b0e8b5522f45a78c790b655e365c13b4acb4c431565a1c90a9ec606b9b28b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jones, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:02:55 compute-0 systemd[1]: Started libpod-conmon-380b0e8b5522f45a78c790b655e365c13b4acb4c431565a1c90a9ec606b9b28b.scope.
Nov 24 10:02:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:02:55 compute-0 podman[279866]: 2025-11-24 10:02:55.139224115 +0000 UTC m=+0.025905189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:02:55 compute-0 podman[279866]: 2025-11-24 10:02:55.248977153 +0000 UTC m=+0.135658257 container init 380b0e8b5522f45a78c790b655e365c13b4acb4c431565a1c90a9ec606b9b28b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jones, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:02:55 compute-0 podman[279866]: 2025-11-24 10:02:55.25617634 +0000 UTC m=+0.142857384 container start 380b0e8b5522f45a78c790b655e365c13b4acb4c431565a1c90a9ec606b9b28b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:02:55 compute-0 podman[279866]: 2025-11-24 10:02:55.259993565 +0000 UTC m=+0.146674669 container attach 380b0e8b5522f45a78c790b655e365c13b4acb4c431565a1c90a9ec606b9b28b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jones, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 10:02:55 compute-0 reverent_jones[279882]: 167 167
Nov 24 10:02:55 compute-0 systemd[1]: libpod-380b0e8b5522f45a78c790b655e365c13b4acb4c431565a1c90a9ec606b9b28b.scope: Deactivated successfully.
Nov 24 10:02:55 compute-0 podman[279866]: 2025-11-24 10:02:55.26427185 +0000 UTC m=+0.150952914 container died 380b0e8b5522f45a78c790b655e365c13b4acb4c431565a1c90a9ec606b9b28b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:02:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f121585ba7e43c639563abb79aa16ced02290160dbcd0b2eb4d01255a6017a6-merged.mount: Deactivated successfully.
Nov 24 10:02:55 compute-0 podman[279866]: 2025-11-24 10:02:55.307729562 +0000 UTC m=+0.194410606 container remove 380b0e8b5522f45a78c790b655e365c13b4acb4c431565a1c90a9ec606b9b28b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_jones, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:02:55 compute-0 systemd[1]: libpod-conmon-380b0e8b5522f45a78c790b655e365c13b4acb4c431565a1c90a9ec606b9b28b.scope: Deactivated successfully.
Nov 24 10:02:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:55.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:55 compute-0 podman[279906]: 2025-11-24 10:02:55.499763978 +0000 UTC m=+0.047348079 container create 15d59069f30111805c98b481ac3a0bbacaaf81f3dff17b69254e4a5038e9ef8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 24 10:02:55 compute-0 systemd[1]: Started libpod-conmon-15d59069f30111805c98b481ac3a0bbacaaf81f3dff17b69254e4a5038e9ef8d.scope.
Nov 24 10:02:55 compute-0 podman[279906]: 2025-11-24 10:02:55.478204087 +0000 UTC m=+0.025788198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:02:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:02:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b13211a2ac8a76b747dbb393208095f4ad07b8c8695f1a91c00e9c26c776dd11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b13211a2ac8a76b747dbb393208095f4ad07b8c8695f1a91c00e9c26c776dd11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b13211a2ac8a76b747dbb393208095f4ad07b8c8695f1a91c00e9c26c776dd11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b13211a2ac8a76b747dbb393208095f4ad07b8c8695f1a91c00e9c26c776dd11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:55 compute-0 podman[279906]: 2025-11-24 10:02:55.595767246 +0000 UTC m=+0.143351337 container init 15d59069f30111805c98b481ac3a0bbacaaf81f3dff17b69254e4a5038e9ef8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:02:55 compute-0 podman[279906]: 2025-11-24 10:02:55.607720811 +0000 UTC m=+0.155304892 container start 15d59069f30111805c98b481ac3a0bbacaaf81f3dff17b69254e4a5038e9ef8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_feynman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 10:02:55 compute-0 podman[279906]: 2025-11-24 10:02:55.610612332 +0000 UTC m=+0.158196413 container attach 15d59069f30111805c98b481ac3a0bbacaaf81f3dff17b69254e4a5038e9ef8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_feynman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]: {
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:     "0": [
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:         {
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             "devices": [
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "/dev/loop3"
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             ],
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             "lv_name": "ceph_lv0",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             "lv_size": "21470642176",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             "name": "ceph_lv0",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             "tags": {
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.cluster_name": "ceph",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.crush_device_class": "",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.encrypted": "0",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.osd_id": "0",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.type": "block",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.vdo": "0",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:                 "ceph.with_tpm": "0"
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             },
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             "type": "block",
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:             "vg_name": "ceph_vg0"
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:         }
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]:     ]
Nov 24 10:02:55 compute-0 nostalgic_feynman[279924]: }
Nov 24 10:02:55 compute-0 systemd[1]: libpod-15d59069f30111805c98b481ac3a0bbacaaf81f3dff17b69254e4a5038e9ef8d.scope: Deactivated successfully.
Nov 24 10:02:55 compute-0 podman[279906]: 2025-11-24 10:02:55.948460654 +0000 UTC m=+0.496044775 container died 15d59069f30111805c98b481ac3a0bbacaaf81f3dff17b69254e4a5038e9ef8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 10:02:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b13211a2ac8a76b747dbb393208095f4ad07b8c8695f1a91c00e9c26c776dd11-merged.mount: Deactivated successfully.
Nov 24 10:02:55 compute-0 podman[279906]: 2025-11-24 10:02:55.993085915 +0000 UTC m=+0.540670006 container remove 15d59069f30111805c98b481ac3a0bbacaaf81f3dff17b69254e4a5038e9ef8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_feynman, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:02:56 compute-0 systemd[1]: libpod-conmon-15d59069f30111805c98b481ac3a0bbacaaf81f3dff17b69254e4a5038e9ef8d.scope: Deactivated successfully.
Nov 24 10:02:56 compute-0 sudo[279801]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:56 compute-0 sudo[279947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:02:56 compute-0 sudo[279947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:56 compute-0 sudo[279947]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:56 compute-0 nova_compute[257700]: 2025-11-24 10:02:56.183 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:56 compute-0 sudo[279972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:02:56 compute-0 sudo[279972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:56.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:56 compute-0 podman[280040]: 2025-11-24 10:02:56.649175348 +0000 UTC m=+0.030387311 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:02:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1086: 353 pgs: 353 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 109 op/s
Nov 24 10:02:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:02:57 compute-0 podman[280040]: 2025-11-24 10:02:57.036470279 +0000 UTC m=+0.417682212 container create 26d775542c1a91f347a0cb767967f8abb19cbe4718de679a7413db689cfb2d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bardeen, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:02:57 compute-0 ceph-mon[74331]: pgmap v1085: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 13 KiB/s wr, 80 op/s
Nov 24 10:02:57 compute-0 systemd[1]: Started libpod-conmon-26d775542c1a91f347a0cb767967f8abb19cbe4718de679a7413db689cfb2d13.scope.
Nov 24 10:02:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:02:57 compute-0 podman[280040]: 2025-11-24 10:02:57.147086838 +0000 UTC m=+0.528298771 container init 26d775542c1a91f347a0cb767967f8abb19cbe4718de679a7413db689cfb2d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bardeen, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 10:02:57 compute-0 podman[280040]: 2025-11-24 10:02:57.15650301 +0000 UTC m=+0.537714943 container start 26d775542c1a91f347a0cb767967f8abb19cbe4718de679a7413db689cfb2d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 10:02:57 compute-0 podman[280040]: 2025-11-24 10:02:57.159599296 +0000 UTC m=+0.540811239 container attach 26d775542c1a91f347a0cb767967f8abb19cbe4718de679a7413db689cfb2d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 24 10:02:57 compute-0 happy_bardeen[280057]: 167 167
Nov 24 10:02:57 compute-0 systemd[1]: libpod-26d775542c1a91f347a0cb767967f8abb19cbe4718de679a7413db689cfb2d13.scope: Deactivated successfully.
Nov 24 10:02:57 compute-0 conmon[280057]: conmon 26d775542c1a91f347a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26d775542c1a91f347a0cb767967f8abb19cbe4718de679a7413db689cfb2d13.scope/container/memory.events
Nov 24 10:02:57 compute-0 podman[280040]: 2025-11-24 10:02:57.165184154 +0000 UTC m=+0.546396097 container died 26d775542c1a91f347a0cb767967f8abb19cbe4718de679a7413db689cfb2d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bardeen, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 10:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf47fa43d028836d2c609a10e9f6f04404200909d551bbb9a16d329e9972105e-merged.mount: Deactivated successfully.
Nov 24 10:02:57 compute-0 podman[280040]: 2025-11-24 10:02:57.205376706 +0000 UTC m=+0.586588669 container remove 26d775542c1a91f347a0cb767967f8abb19cbe4718de679a7413db689cfb2d13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:02:57 compute-0 systemd[1]: libpod-conmon-26d775542c1a91f347a0cb767967f8abb19cbe4718de679a7413db689cfb2d13.scope: Deactivated successfully.
Nov 24 10:02:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:57.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:57 compute-0 podman[280079]: 2025-11-24 10:02:57.401034672 +0000 UTC m=+0.057268024 container create 9ea54bed5d05d48f27d4971c95c7b9c6db851393baaec63ed791cb1849a103fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 10:02:57 compute-0 systemd[1]: Started libpod-conmon-9ea54bed5d05d48f27d4971c95c7b9c6db851393baaec63ed791cb1849a103fc.scope.
Nov 24 10:02:57 compute-0 podman[280079]: 2025-11-24 10:02:57.383957081 +0000 UTC m=+0.040190453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:02:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0cbb9ce7ee33dc5150b6cd0e490ef5600dafb2da26741c7896bb0997099addc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0cbb9ce7ee33dc5150b6cd0e490ef5600dafb2da26741c7896bb0997099addc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0cbb9ce7ee33dc5150b6cd0e490ef5600dafb2da26741c7896bb0997099addc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0cbb9ce7ee33dc5150b6cd0e490ef5600dafb2da26741c7896bb0997099addc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:02:57 compute-0 podman[280079]: 2025-11-24 10:02:57.503777385 +0000 UTC m=+0.160010737 container init 9ea54bed5d05d48f27d4971c95c7b9c6db851393baaec63ed791cb1849a103fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_lovelace, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:02:57 compute-0 podman[280079]: 2025-11-24 10:02:57.510299206 +0000 UTC m=+0.166532558 container start 9ea54bed5d05d48f27d4971c95c7b9c6db851393baaec63ed791cb1849a103fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 10:02:57 compute-0 podman[280079]: 2025-11-24 10:02:57.513871394 +0000 UTC m=+0.170104766 container attach 9ea54bed5d05d48f27d4971c95c7b9c6db851393baaec63ed791cb1849a103fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Nov 24 10:02:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:57.562Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:02:58 compute-0 ceph-mon[74331]: pgmap v1086: 353 pgs: 353 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 109 op/s
Nov 24 10:02:58 compute-0 lvm[280172]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:02:58 compute-0 lvm[280172]: VG ceph_vg0 finished
Nov 24 10:02:58 compute-0 fervent_lovelace[280095]: {}
Nov 24 10:02:58 compute-0 systemd[1]: libpod-9ea54bed5d05d48f27d4971c95c7b9c6db851393baaec63ed791cb1849a103fc.scope: Deactivated successfully.
Nov 24 10:02:58 compute-0 systemd[1]: libpod-9ea54bed5d05d48f27d4971c95c7b9c6db851393baaec63ed791cb1849a103fc.scope: Consumed 1.342s CPU time.
Nov 24 10:02:58 compute-0 podman[280079]: 2025-11-24 10:02:58.337414797 +0000 UTC m=+0.993648149 container died 9ea54bed5d05d48f27d4971c95c7b9c6db851393baaec63ed791cb1849a103fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 24 10:02:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0cbb9ce7ee33dc5150b6cd0e490ef5600dafb2da26741c7896bb0997099addc-merged.mount: Deactivated successfully.
Nov 24 10:02:58 compute-0 podman[280079]: 2025-11-24 10:02:58.387615794 +0000 UTC m=+1.043849156 container remove 9ea54bed5d05d48f27d4971c95c7b9c6db851393baaec63ed791cb1849a103fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 10:02:58 compute-0 systemd[1]: libpod-conmon-9ea54bed5d05d48f27d4971c95c7b9c6db851393baaec63ed791cb1849a103fc.scope: Deactivated successfully.
Nov 24 10:02:58 compute-0 sudo[279972]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:02:58 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:02:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:02:58 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:02:58 compute-0 sudo[280190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:02:58 compute-0 sudo[280190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:02:58 compute-0 sudo[280190]: pam_unix(sudo:session): session closed for user root
Nov 24 10:02:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:02:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:02:58.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:02:58 compute-0 sshd-session[280101]: Invalid user root2 from 36.255.3.203 port 48178
Nov 24 10:02:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:58.911Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:02:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:58.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:02:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:02:58.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:02:58 compute-0 ovn_controller[155123]: 2025-11-24T10:02:58Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:6a:6d 10.100.0.8
Nov 24 10:02:58 compute-0 ovn_controller[155123]: 2025-11-24T10:02:58Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:6a:6d 10.100.0.8
Nov 24 10:02:58 compute-0 nova_compute[257700]: 2025-11-24 10:02:58.948 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:02:58 compute-0 sshd-session[280101]: Received disconnect from 36.255.3.203 port 48178:11: Bye Bye [preauth]
Nov 24 10:02:58 compute-0 sshd-session[280101]: Disconnected from invalid user root2 36.255.3.203 port 48178 [preauth]
Nov 24 10:02:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 95 op/s
Nov 24 10:02:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:02:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:02:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:02:59.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:02:59 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:02:59 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:03:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:02:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:03:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:03:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:03:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:03:00 compute-0 ceph-mon[74331]: pgmap v1087: 353 pgs: 353 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 95 op/s
Nov 24 10:03:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/139457227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:03:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:03:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:00.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 95 op/s
Nov 24 10:03:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:03:00] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Nov 24 10:03:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:03:00] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Nov 24 10:03:01 compute-0 nova_compute[257700]: 2025-11-24 10:03:01.186 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:01.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3946666677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:03:01 compute-0 sshd-session[280217]: Invalid user tv from 83.229.122.23 port 54014
Nov 24 10:03:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:02 compute-0 sshd-session[280217]: Received disconnect from 83.229.122.23 port 54014:11: Bye Bye [preauth]
Nov 24 10:03:02 compute-0 sshd-session[280217]: Disconnected from invalid user tv 83.229.122.23 port 54014 [preauth]
Nov 24 10:03:02 compute-0 ceph-mon[74331]: pgmap v1088: 353 pgs: 353 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 95 op/s
Nov 24 10:03:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/865843196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:03:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/865843196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:03:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:02.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 4.2 MiB/s wr, 98 op/s
Nov 24 10:03:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:03.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:03 compute-0 nova_compute[257700]: 2025-11-24 10:03:03.950 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:04 compute-0 ceph-mon[74331]: pgmap v1089: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 369 KiB/s rd, 4.2 MiB/s wr, 98 op/s
Nov 24 10:03:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:04.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 24 10:03:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:03:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:03:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:03:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:03:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:05.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:06 compute-0 nova_compute[257700]: 2025-11-24 10:03:06.189 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:06 compute-0 ceph-mon[74331]: pgmap v1090: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Nov 24 10:03:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:06.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 24 10:03:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:07.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:07.564Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:03:08 compute-0 sudo[280228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:03:08 compute-0 sudo[280228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:03:08 compute-0 sudo[280228]: pam_unix(sudo:session): session closed for user root
Nov 24 10:03:08 compute-0 ceph-mon[74331]: pgmap v1091: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 24 10:03:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:08.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:03:08 compute-0 nova_compute[257700]: 2025-11-24 10:03:08.952 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Nov 24 10:03:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:09.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:09 compute-0 ceph-mon[74331]: pgmap v1092: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Nov 24 10:03:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:03:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:03:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:03:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:03:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:10.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:03:10] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Nov 24 10:03:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Nov 24 10:03:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:03:10] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Nov 24 10:03:11 compute-0 nova_compute[257700]: 2025-11-24 10:03:11.192 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:11.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:12 compute-0 ceph-mon[74331]: pgmap v1093: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Nov 24 10:03:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:12.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Nov 24 10:03:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:13.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:13 compute-0 nova_compute[257700]: 2025-11-24 10:03:13.954 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:14 compute-0 ceph-mon[74331]: pgmap v1094: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Nov 24 10:03:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:14.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1095: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Nov 24 10:03:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:03:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:03:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:03:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:03:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:15.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:03:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:03:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:03:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:03:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:03:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:03:15 compute-0 podman[280260]: 2025-11-24 10:03:15.864628419 +0000 UTC m=+0.132707744 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:03:15 compute-0 podman[280261]: 2025-11-24 10:03:15.887851471 +0000 UTC m=+0.151000054 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 10:03:16 compute-0 ceph-mon[74331]: pgmap v1095: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Nov 24 10:03:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:03:16 compute-0 nova_compute[257700]: 2025-11-24 10:03:16.193 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:16.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 188 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Nov 24 10:03:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:17.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:17.565Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:03:18 compute-0 ceph-mon[74331]: pgmap v1096: 353 pgs: 353 active+clean; 188 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Nov 24 10:03:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:18.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:18.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:03:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:18.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:03:18 compute-0 nova_compute[257700]: 2025-11-24 10:03:18.957 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 188 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 237 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Nov 24 10:03:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:19.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:19 compute-0 nova_compute[257700]: 2025-11-24 10:03:19.936 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:03:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:03:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:03:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:03:20 compute-0 ceph-mon[74331]: pgmap v1097: 353 pgs: 353 active+clean; 188 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 237 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Nov 24 10:03:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:20.574 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:20.575 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:20.575 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:20.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:20 compute-0 nova_compute[257700]: 2025-11-24 10:03:20.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:03:20] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 24 10:03:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:03:20] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Nov 24 10:03:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1098: 353 pgs: 353 active+clean; 188 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 237 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Nov 24 10:03:21 compute-0 nova_compute[257700]: 2025-11-24 10:03:21.195 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:21.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:21 compute-0 nova_compute[257700]: 2025-11-24 10:03:21.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:22 compute-0 ceph-mon[74331]: pgmap v1098: 353 pgs: 353 active+clean; 188 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 237 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Nov 24 10:03:22 compute-0 nova_compute[257700]: 2025-11-24 10:03:22.334 257704 INFO nova.compute.manager [None req-608aaa98-9e3c-4302-8c1b-2214c047dc8a 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Get console output
Nov 24 10:03:22 compute-0 nova_compute[257700]: 2025-11-24 10:03:22.338 266539 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 10:03:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:22.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:22 compute-0 podman[280310]: 2025-11-24 10:03:22.780090032 +0000 UTC m=+0.052267870 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 10:03:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:22.900 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:03:22 compute-0 nova_compute[257700]: 2025-11-24 10:03:22.900 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:22 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:22.901 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 10:03:22 compute-0 nova_compute[257700]: 2025-11-24 10:03:22.916 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:22 compute-0 nova_compute[257700]: 2025-11-24 10:03:22.937 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:22 compute-0 nova_compute[257700]: 2025-11-24 10:03:22.937 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:03:22 compute-0 nova_compute[257700]: 2025-11-24 10:03:22.937 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:03:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.052 257704 DEBUG nova.compute.manager [req-681fcb9b-972b-41b9-9633-e32a62ca4a0a req-d5416298-4992-43e1-b61d-684b30cbf849 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-changed-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.053 257704 DEBUG nova.compute.manager [req-681fcb9b-972b-41b9-9633-e32a62ca4a0a req-d5416298-4992-43e1-b61d-684b30cbf849 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Refreshing instance network info cache due to event network-changed-6f615f70-f3a3-45d6-8078-6f32abae3c0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.053 257704 DEBUG oslo_concurrency.lockutils [req-681fcb9b-972b-41b9-9633-e32a62ca4a0a req-d5416298-4992-43e1-b61d-684b30cbf849 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.053 257704 DEBUG oslo_concurrency.lockutils [req-681fcb9b-972b-41b9-9633-e32a62ca4a0a req-d5416298-4992-43e1-b61d-684b30cbf849 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.053 257704 DEBUG nova.network.neutron [req-681fcb9b-972b-41b9-9633-e32a62ca4a0a req-d5416298-4992-43e1-b61d-684b30cbf849 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Refreshing network info cache for port 6f615f70-f3a3-45d6-8078-6f32abae3c0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:03:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:23.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.500 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.668 257704 DEBUG nova.compute.manager [req-2201df8e-038c-458d-bed2-dbb2ac170a06 req-50233413-4f66-44e3-825e-e30dfaf2b73b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-vif-unplugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.669 257704 DEBUG oslo_concurrency.lockutils [req-2201df8e-038c-458d-bed2-dbb2ac170a06 req-50233413-4f66-44e3-825e-e30dfaf2b73b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.670 257704 DEBUG oslo_concurrency.lockutils [req-2201df8e-038c-458d-bed2-dbb2ac170a06 req-50233413-4f66-44e3-825e-e30dfaf2b73b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.670 257704 DEBUG oslo_concurrency.lockutils [req-2201df8e-038c-458d-bed2-dbb2ac170a06 req-50233413-4f66-44e3-825e-e30dfaf2b73b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.671 257704 DEBUG nova.compute.manager [req-2201df8e-038c-458d-bed2-dbb2ac170a06 req-50233413-4f66-44e3-825e-e30dfaf2b73b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] No waiting events found dispatching network-vif-unplugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.671 257704 WARNING nova.compute.manager [req-2201df8e-038c-458d-bed2-dbb2ac170a06 req-50233413-4f66-44e3-825e-e30dfaf2b73b 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received unexpected event network-vif-unplugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b for instance with vm_state active and task_state None.
Nov 24 10:03:23 compute-0 nova_compute[257700]: 2025-11-24 10:03:23.986 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:24 compute-0 nova_compute[257700]: 2025-11-24 10:03:24.089 257704 INFO nova.compute.manager [None req-9be9bab9-9b35-4b64-a143-1f9a7c5bdafe 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Get console output
Nov 24 10:03:24 compute-0 nova_compute[257700]: 2025-11-24 10:03:24.094 266539 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 10:03:24 compute-0 ceph-mon[74331]: pgmap v1099: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 24 10:03:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:24.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:24 compute-0 nova_compute[257700]: 2025-11-24 10:03:24.892 257704 DEBUG nova.network.neutron [req-681fcb9b-972b-41b9-9633-e32a62ca4a0a req-d5416298-4992-43e1-b61d-684b30cbf849 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updated VIF entry in instance network info cache for port 6f615f70-f3a3-45d6-8078-6f32abae3c0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:03:24 compute-0 nova_compute[257700]: 2025-11-24 10:03:24.893 257704 DEBUG nova.network.neutron [req-681fcb9b-972b-41b9-9633-e32a62ca4a0a req-d5416298-4992-43e1-b61d-684b30cbf849 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updating instance_info_cache with network_info: [{"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:03:24 compute-0 nova_compute[257700]: 2025-11-24 10:03:24.912 257704 DEBUG oslo_concurrency.lockutils [req-681fcb9b-972b-41b9-9633-e32a62ca4a0a req-d5416298-4992-43e1-b61d-684b30cbf849 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:03:24 compute-0 nova_compute[257700]: 2025-11-24 10:03:24.912 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquired lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:03:24 compute-0 nova_compute[257700]: 2025-11-24 10:03:24.913 257704 DEBUG nova.network.neutron [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 24 10:03:24 compute-0 nova_compute[257700]: 2025-11-24 10:03:24.913 257704 DEBUG nova.objects.instance [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 374e7431-b73b-4a49-8aba-9ac699a35ebf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:03:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 10:03:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:03:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:03:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:03:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:03:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2656099280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:25.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:25 compute-0 nova_compute[257700]: 2025-11-24 10:03:25.799 257704 DEBUG nova.compute.manager [req-0dfe56a7-e755-4cdb-b16b-0e510e10bedf req-08769a1b-e627-44d6-9900-78131cf7106d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-changed-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:25 compute-0 nova_compute[257700]: 2025-11-24 10:03:25.799 257704 DEBUG nova.compute.manager [req-0dfe56a7-e755-4cdb-b16b-0e510e10bedf req-08769a1b-e627-44d6-9900-78131cf7106d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Refreshing instance network info cache due to event network-changed-6f615f70-f3a3-45d6-8078-6f32abae3c0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:03:25 compute-0 nova_compute[257700]: 2025-11-24 10:03:25.800 257704 DEBUG oslo_concurrency.lockutils [req-0dfe56a7-e755-4cdb-b16b-0e510e10bedf req-08769a1b-e627-44d6-9900-78131cf7106d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:03:25 compute-0 nova_compute[257700]: 2025-11-24 10:03:25.821 257704 DEBUG nova.compute.manager [req-58f174ef-5eb9-4747-985f-94dfcdcfeed1 req-b120979c-df26-49c8-b620-c7b6b8d6087a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:25 compute-0 nova_compute[257700]: 2025-11-24 10:03:25.822 257704 DEBUG oslo_concurrency.lockutils [req-58f174ef-5eb9-4747-985f-94dfcdcfeed1 req-b120979c-df26-49c8-b620-c7b6b8d6087a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:25 compute-0 nova_compute[257700]: 2025-11-24 10:03:25.822 257704 DEBUG oslo_concurrency.lockutils [req-58f174ef-5eb9-4747-985f-94dfcdcfeed1 req-b120979c-df26-49c8-b620-c7b6b8d6087a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:25 compute-0 nova_compute[257700]: 2025-11-24 10:03:25.823 257704 DEBUG oslo_concurrency.lockutils [req-58f174ef-5eb9-4747-985f-94dfcdcfeed1 req-b120979c-df26-49c8-b620-c7b6b8d6087a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:25 compute-0 nova_compute[257700]: 2025-11-24 10:03:25.823 257704 DEBUG nova.compute.manager [req-58f174ef-5eb9-4747-985f-94dfcdcfeed1 req-b120979c-df26-49c8-b620-c7b6b8d6087a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] No waiting events found dispatching network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:03:25 compute-0 nova_compute[257700]: 2025-11-24 10:03:25.823 257704 WARNING nova.compute.manager [req-58f174ef-5eb9-4747-985f-94dfcdcfeed1 req-b120979c-df26-49c8-b620-c7b6b8d6087a 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received unexpected event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b for instance with vm_state active and task_state None.
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.001 257704 INFO nova.compute.manager [None req-f3efdf14-593c-477a-a820-775e9e83a1ca 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Get console output
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.007 266539 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 24 10:03:26 compute-0 ceph-mon[74331]: pgmap v1100: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 24 10:03:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1881790664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1244677551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1287250146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.198 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:26.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.934 257704 DEBUG nova.network.neutron [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updating instance_info_cache with network_info: [{"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.949 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Releasing lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.950 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.950 257704 DEBUG oslo_concurrency.lockutils [req-0dfe56a7-e755-4cdb-b16b-0e510e10bedf req-08769a1b-e627-44d6-9900-78131cf7106d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.950 257704 DEBUG nova.network.neutron [req-0dfe56a7-e755-4cdb-b16b-0e510e10bedf req-08769a1b-e627-44d6-9900-78131cf7106d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Refreshing network info cache for port 6f615f70-f3a3-45d6-8078-6f32abae3c0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.952 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.952 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.952 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.953 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.976 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.976 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.977 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.977 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:03:26 compute-0 nova_compute[257700]: 2025-11-24 10:03:26.977 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 398 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 24 10:03:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:27.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:03:27 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192466239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.426 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.504 257704 DEBUG nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.505 257704 DEBUG nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 24 10:03:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:27.567Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.711 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.712 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4374MB free_disk=59.897212982177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.712 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.713 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.792 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Instance 374e7431-b73b-4a49-8aba-9ac699a35ebf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.793 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.793 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.832 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.911 257704 DEBUG nova.compute.manager [req-6c1d8fa1-385a-4c0d-b990-927d9cff6572 req-ec8b8785-441e-4947-835f-1a9658026799 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.912 257704 DEBUG oslo_concurrency.lockutils [req-6c1d8fa1-385a-4c0d-b990-927d9cff6572 req-ec8b8785-441e-4947-835f-1a9658026799 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.912 257704 DEBUG oslo_concurrency.lockutils [req-6c1d8fa1-385a-4c0d-b990-927d9cff6572 req-ec8b8785-441e-4947-835f-1a9658026799 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.912 257704 DEBUG oslo_concurrency.lockutils [req-6c1d8fa1-385a-4c0d-b990-927d9cff6572 req-ec8b8785-441e-4947-835f-1a9658026799 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.913 257704 DEBUG nova.compute.manager [req-6c1d8fa1-385a-4c0d-b990-927d9cff6572 req-ec8b8785-441e-4947-835f-1a9658026799 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] No waiting events found dispatching network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.913 257704 WARNING nova.compute.manager [req-6c1d8fa1-385a-4c0d-b990-927d9cff6572 req-ec8b8785-441e-4947-835f-1a9658026799 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received unexpected event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b for instance with vm_state active and task_state None.
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.913 257704 DEBUG nova.compute.manager [req-6c1d8fa1-385a-4c0d-b990-927d9cff6572 req-ec8b8785-441e-4947-835f-1a9658026799 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.913 257704 DEBUG oslo_concurrency.lockutils [req-6c1d8fa1-385a-4c0d-b990-927d9cff6572 req-ec8b8785-441e-4947-835f-1a9658026799 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.914 257704 DEBUG oslo_concurrency.lockutils [req-6c1d8fa1-385a-4c0d-b990-927d9cff6572 req-ec8b8785-441e-4947-835f-1a9658026799 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.914 257704 DEBUG oslo_concurrency.lockutils [req-6c1d8fa1-385a-4c0d-b990-927d9cff6572 req-ec8b8785-441e-4947-835f-1a9658026799 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.914 257704 DEBUG nova.compute.manager [req-6c1d8fa1-385a-4c0d-b990-927d9cff6572 req-ec8b8785-441e-4947-835f-1a9658026799 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] No waiting events found dispatching network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:03:27 compute-0 nova_compute[257700]: 2025-11-24 10:03:27.914 257704 WARNING nova.compute.manager [req-6c1d8fa1-385a-4c0d-b990-927d9cff6572 req-ec8b8785-441e-4947-835f-1a9658026799 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received unexpected event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b for instance with vm_state active and task_state None.
Nov 24 10:03:28 compute-0 ceph-mon[74331]: pgmap v1101: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 398 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 24 10:03:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1192466239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:03:28 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4215531718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:28 compute-0 nova_compute[257700]: 2025-11-24 10:03:28.283 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:28 compute-0 nova_compute[257700]: 2025-11-24 10:03:28.291 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:03:28 compute-0 nova_compute[257700]: 2025-11-24 10:03:28.308 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:03:28 compute-0 nova_compute[257700]: 2025-11-24 10:03:28.335 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:03:28 compute-0 nova_compute[257700]: 2025-11-24 10:03:28.335 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:28 compute-0 nova_compute[257700]: 2025-11-24 10:03:28.470 257704 DEBUG nova.network.neutron [req-0dfe56a7-e755-4cdb-b16b-0e510e10bedf req-08769a1b-e627-44d6-9900-78131cf7106d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updated VIF entry in instance network info cache for port 6f615f70-f3a3-45d6-8078-6f32abae3c0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 24 10:03:28 compute-0 nova_compute[257700]: 2025-11-24 10:03:28.471 257704 DEBUG nova.network.neutron [req-0dfe56a7-e755-4cdb-b16b-0e510e10bedf req-08769a1b-e627-44d6-9900-78131cf7106d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updating instance_info_cache with network_info: [{"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:03:28 compute-0 nova_compute[257700]: 2025-11-24 10:03:28.481 257704 DEBUG oslo_concurrency.lockutils [req-0dfe56a7-e755-4cdb-b16b-0e510e10bedf req-08769a1b-e627-44d6-9900-78131cf7106d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:03:28 compute-0 sudo[280380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:03:28 compute-0 sudo[280380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:03:28 compute-0 sudo[280380]: pam_unix(sudo:session): session closed for user root
Nov 24 10:03:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:28.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:28.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:03:28 compute-0 nova_compute[257700]: 2025-11-24 10:03:28.988 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 109 KiB/s wr, 24 op/s
Nov 24 10:03:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4215531718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:29.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:03:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:03:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:03:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:03:30 compute-0 ceph-mon[74331]: pgmap v1102: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 109 KiB/s wr, 24 op/s
Nov 24 10:03:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/270683702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:30 compute-0 nova_compute[257700]: 2025-11-24 10:03:30.305 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:30 compute-0 nova_compute[257700]: 2025-11-24 10:03:30.306 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:03:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:30.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:30 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:30.904 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:03:30] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Nov 24 10:03:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:03:30] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Nov 24 10:03:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 109 KiB/s wr, 24 op/s
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.126 257704 DEBUG oslo_concurrency.lockutils [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "374e7431-b73b-4a49-8aba-9ac699a35ebf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.127 257704 DEBUG oslo_concurrency.lockutils [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.127 257704 DEBUG oslo_concurrency.lockutils [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.127 257704 DEBUG oslo_concurrency.lockutils [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.128 257704 DEBUG oslo_concurrency.lockutils [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.129 257704 INFO nova.compute.manager [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Terminating instance
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.130 257704 DEBUG nova.compute.manager [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 10:03:31 compute-0 kernel: tap6f615f70-f3 (unregistering): left promiscuous mode
Nov 24 10:03:31 compute-0 NetworkManager[48883]: <info>  [1763978611.1881] device (tap6f615f70-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 24 10:03:31 compute-0 ovn_controller[155123]: 2025-11-24T10:03:31Z|00094|binding|INFO|Releasing lport 6f615f70-f3a3-45d6-8078-6f32abae3c0b from this chassis (sb_readonly=0)
Nov 24 10:03:31 compute-0 ovn_controller[155123]: 2025-11-24T10:03:31Z|00095|binding|INFO|Setting lport 6f615f70-f3a3-45d6-8078-6f32abae3c0b down in Southbound
Nov 24 10:03:31 compute-0 ovn_controller[155123]: 2025-11-24T10:03:31Z|00096|binding|INFO|Removing iface tap6f615f70-f3 ovn-installed in OVS
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.201 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.204 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.210 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:6a:6d 10.100.0.8'], port_security=['fa:16:3e:3a:6a:6d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '374e7431-b73b-4a49-8aba-9ac699a35ebf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94d069fc040647d5a6e54894eec915fe', 'neutron:revision_number': '8', 'neutron:security_group_ids': '4e79fa3a-aa58-45ad-be12-11ed04eeadbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5c42cc1-2181-41fb-bb98-22dec924e208, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>], logical_port=6f615f70-f3a3-45d6-8078-6f32abae3c0b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f45b2855760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.212 165073 INFO neutron.agent.ovn.metadata.agent [-] Port 6f615f70-f3a3-45d6-8078-6f32abae3c0b in datapath d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 unbound from our chassis
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.213 165073 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.215 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[18001744-4571-4b6f-88d1-80914138f5d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.216 165073 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 namespace which is not needed anymore
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.230 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:03:31 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 24 10:03:31 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000b.scope: Consumed 14.844s CPU time.
Nov 24 10:03:31 compute-0 systemd-machined[219130]: Machine qemu-7-instance-0000000b terminated.
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.353 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:31 compute-0 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[279446]: [NOTICE]   (279450) : haproxy version is 2.8.14-c23fe91
Nov 24 10:03:31 compute-0 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[279446]: [NOTICE]   (279450) : path to executable is /usr/sbin/haproxy
Nov 24 10:03:31 compute-0 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[279446]: [WARNING]  (279450) : Exiting Master process...
Nov 24 10:03:31 compute-0 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[279446]: [ALERT]    (279450) : Current worker (279452) exited with code 143 (Terminated)
Nov 24 10:03:31 compute-0 neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094[279446]: [WARNING]  (279450) : All workers exited. Exiting... (0)
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.357 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:31 compute-0 systemd[1]: libpod-6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc.scope: Deactivated successfully.
Nov 24 10:03:31 compute-0 podman[280431]: 2025-11-24 10:03:31.366406545 +0000 UTC m=+0.046655501 container died 6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.371 257704 INFO nova.virt.libvirt.driver [-] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Instance destroyed successfully.
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.373 257704 DEBUG nova.objects.instance [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lazy-loading 'resources' on Instance uuid 374e7431-b73b-4a49-8aba-9ac699a35ebf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.387 257704 DEBUG nova.virt.libvirt.vif [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-24T10:02:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1423137428',display_name='tempest-TestNetworkBasicOps-server-1423137428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1423137428',id=11,image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPoYJrNlBcldqeAeHx35OCB0CcI0kZ2sbqn3p9f2hVqq2CzZeVKoOWsTSbQ3/Y8hxBs5OloguADBMRRRYFv0gtRH9qAkoMCy9kFYI8rxuxHCJ5atJHHGqVmT9XSSSKf04A==',key_name='tempest-TestNetworkBasicOps-2017730832',keypairs=<?>,launch_index=0,launched_at=2025-11-24T10:02:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94d069fc040647d5a6e54894eec915fe',ramdisk_id='',reservation_id='r-emakj80b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6ef14bdf-4f04-4400-8040-4409d9d5271e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1844071378',owner_user_name='tempest-TestNetworkBasicOps-1844071378-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-24T10:02:45Z,user_data=None,user_id='43f79ff3105e4372a3c095e8057d4f1f',uuid=374e7431-b73b-4a49-8aba-9ac699a35ebf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.388 257704 DEBUG nova.network.os_vif_util [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converting VIF {"id": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "address": "fa:16:3e:3a:6a:6d", "network": {"id": "d9ce2622-5822-4ecf-9fb9-f5f15c8ea094", "bridge": "br-int", "label": "tempest-network-smoke--73093411", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94d069fc040647d5a6e54894eec915fe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f615f70-f3", "ovs_interfaceid": "6f615f70-f3a3-45d6-8078-6f32abae3c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.390 257704 DEBUG nova.network.os_vif_util [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:6a:6d,bridge_name='br-int',has_traffic_filtering=True,id=6f615f70-f3a3-45d6-8078-6f32abae3c0b,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f615f70-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.390 257704 DEBUG os_vif [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:6a:6d,bridge_name='br-int',has_traffic_filtering=True,id=6f615f70-f3a3-45d6-8078-6f32abae3c0b,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f615f70-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.395 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.396 257704 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f615f70-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.398 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.399 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc-userdata-shm.mount: Deactivated successfully.
Nov 24 10:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d63a08d9f038fc9668efc04ae3cb149eb7b182e1d89a04f14d50d676f266685-merged.mount: Deactivated successfully.
Nov 24 10:03:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:31.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.407 257704 INFO os_vif [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:6a:6d,bridge_name='br-int',has_traffic_filtering=True,id=6f615f70-f3a3-45d6-8078-6f32abae3c0b,network=Network(d9ce2622-5822-4ecf-9fb9-f5f15c8ea094),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f615f70-f3')
Nov 24 10:03:31 compute-0 podman[280431]: 2025-11-24 10:03:31.413910957 +0000 UTC m=+0.094159903 container cleanup 6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 10:03:31 compute-0 systemd[1]: libpod-conmon-6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc.scope: Deactivated successfully.
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.462 257704 DEBUG nova.compute.manager [req-68743e69-1882-48f6-a038-6856c27d22d3 req-d1473f95-3526-4e29-af75-6ea24aae820e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-vif-unplugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.463 257704 DEBUG oslo_concurrency.lockutils [req-68743e69-1882-48f6-a038-6856c27d22d3 req-d1473f95-3526-4e29-af75-6ea24aae820e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.463 257704 DEBUG oslo_concurrency.lockutils [req-68743e69-1882-48f6-a038-6856c27d22d3 req-d1473f95-3526-4e29-af75-6ea24aae820e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.463 257704 DEBUG oslo_concurrency.lockutils [req-68743e69-1882-48f6-a038-6856c27d22d3 req-d1473f95-3526-4e29-af75-6ea24aae820e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.464 257704 DEBUG nova.compute.manager [req-68743e69-1882-48f6-a038-6856c27d22d3 req-d1473f95-3526-4e29-af75-6ea24aae820e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] No waiting events found dispatching network-vif-unplugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.464 257704 DEBUG nova.compute.manager [req-68743e69-1882-48f6-a038-6856c27d22d3 req-d1473f95-3526-4e29-af75-6ea24aae820e 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-vif-unplugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 24 10:03:31 compute-0 podman[280484]: 2025-11-24 10:03:31.495171551 +0000 UTC m=+0.053516761 container remove 6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.502 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe2b82b-4718-401a-8740-45ccaf364ed8]: (4, ('Mon Nov 24 10:03:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 (6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc)\n6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc\nMon Nov 24 10:03:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 (6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc)\n6394699b3a0c2c4fc5433a4c70985566ceb3edc4584262e53990ee078e99ebcc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.504 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[1de2594b-e0b7-4c1b-96d0-8de3287deaaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.505 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9ce2622-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.507 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:31 compute-0 kernel: tapd9ce2622-50: left promiscuous mode
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.521 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.525 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[6458f0f3-c726-44c6-a568-97adb98e217f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.555 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebd148c-523f-4756-99b1-1c546dd69bf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.558 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[b2da00dc-6575-4246-899a-14bfa58ee5cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.577 264910 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbe5cb7-8eb7-4ed4-9dc6-20ca16d66eb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451275, 'reachable_time': 32847, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280503, 'error': None, 'target': 'ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.581 165227 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9ce2622-5822-4ecf-9fb9-f5f15c8ea094 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 24 10:03:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:03:31.581 165227 DEBUG oslo.privsep.daemon [-] privsep: reply[ac51abf4-fa68-48e5-b958-4180049477d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 10:03:31 compute-0 systemd[1]: run-netns-ovnmeta\x2dd9ce2622\x2d5822\x2d4ecf\x2d9fb9\x2df5f15c8ea094.mount: Deactivated successfully.
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.820 257704 INFO nova.virt.libvirt.driver [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Deleting instance files /var/lib/nova/instances/374e7431-b73b-4a49-8aba-9ac699a35ebf_del
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.821 257704 INFO nova.virt.libvirt.driver [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Deletion of /var/lib/nova/instances/374e7431-b73b-4a49-8aba-9ac699a35ebf_del complete
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.885 257704 INFO nova.compute.manager [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.886 257704 DEBUG oslo.service.loopingcall [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.886 257704 DEBUG nova.compute.manager [-] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 10:03:31 compute-0 nova_compute[257700]: 2025-11-24 10:03:31.887 257704 DEBUG nova.network.neutron [-] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 10:03:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.057 257704 DEBUG nova.compute.manager [req-0f3bd2f2-963b-4140-9aa7-880b564e7725 req-ca3960c1-faf3-48c8-82af-a31177b31b0d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-changed-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.057 257704 DEBUG nova.compute.manager [req-0f3bd2f2-963b-4140-9aa7-880b564e7725 req-ca3960c1-faf3-48c8-82af-a31177b31b0d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Refreshing instance network info cache due to event network-changed-6f615f70-f3a3-45d6-8078-6f32abae3c0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.058 257704 DEBUG oslo_concurrency.lockutils [req-0f3bd2f2-963b-4140-9aa7-880b564e7725 req-ca3960c1-faf3-48c8-82af-a31177b31b0d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.058 257704 DEBUG oslo_concurrency.lockutils [req-0f3bd2f2-963b-4140-9aa7-880b564e7725 req-ca3960c1-faf3-48c8-82af-a31177b31b0d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquired lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.058 257704 DEBUG nova.network.neutron [req-0f3bd2f2-963b-4140-9aa7-880b564e7725 req-ca3960c1-faf3-48c8-82af-a31177b31b0d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Refreshing network info cache for port 6f615f70-f3a3-45d6-8078-6f32abae3c0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 24 10:03:32 compute-0 ceph-mon[74331]: pgmap v1103: 353 pgs: 353 active+clean; 200 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 161 KiB/s rd, 109 KiB/s wr, 24 op/s
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.297 257704 INFO nova.network.neutron [req-0f3bd2f2-963b-4140-9aa7-880b564e7725 req-ca3960c1-faf3-48c8-82af-a31177b31b0d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Port 6f615f70-f3a3-45d6-8078-6f32abae3c0b from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.298 257704 DEBUG nova.network.neutron [req-0f3bd2f2-963b-4140-9aa7-880b564e7725 req-ca3960c1-faf3-48c8-82af-a31177b31b0d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.320 257704 DEBUG oslo_concurrency.lockutils [req-0f3bd2f2-963b-4140-9aa7-880b564e7725 req-ca3960c1-faf3-48c8-82af-a31177b31b0d 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Releasing lock "refresh_cache-374e7431-b73b-4a49-8aba-9ac699a35ebf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.336 257704 DEBUG nova.network.neutron [-] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.345 257704 INFO nova.compute.manager [-] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Took 0.46 seconds to deallocate network for instance.
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.392 257704 DEBUG oslo_concurrency.lockutils [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.392 257704 DEBUG oslo_concurrency.lockutils [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.445 257704 DEBUG oslo_concurrency.processutils [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:03:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:32.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:03:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946716586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.890 257704 DEBUG oslo_concurrency.processutils [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.897 257704 DEBUG nova.compute.provider_tree [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.922 257704 DEBUG nova.scheduler.client.report [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.948 257704 DEBUG oslo_concurrency.lockutils [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:32 compute-0 nova_compute[257700]: 2025-11-24 10:03:32.984 257704 INFO nova.scheduler.client.report [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Deleted allocations for instance 374e7431-b73b-4a49-8aba-9ac699a35ebf
Nov 24 10:03:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 114 KiB/s wr, 53 op/s
Nov 24 10:03:33 compute-0 nova_compute[257700]: 2025-11-24 10:03:33.056 257704 DEBUG oslo_concurrency.lockutils [None req-1ba4a1de-6793-4f45-b6cf-4cedc4e3fb43 43f79ff3105e4372a3c095e8057d4f1f 94d069fc040647d5a6e54894eec915fe - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/946716586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:33.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:33 compute-0 nova_compute[257700]: 2025-11-24 10:03:33.525 257704 DEBUG nova.compute.manager [req-d83da3ab-a690-4d92-9e96-c1a157dd1367 req-007fcd21-dc6c-4b2b-9cbc-a8836d3e11e5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:33 compute-0 nova_compute[257700]: 2025-11-24 10:03:33.526 257704 DEBUG oslo_concurrency.lockutils [req-d83da3ab-a690-4d92-9e96-c1a157dd1367 req-007fcd21-dc6c-4b2b-9cbc-a8836d3e11e5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Acquiring lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:03:33 compute-0 nova_compute[257700]: 2025-11-24 10:03:33.526 257704 DEBUG oslo_concurrency.lockutils [req-d83da3ab-a690-4d92-9e96-c1a157dd1367 req-007fcd21-dc6c-4b2b-9cbc-a8836d3e11e5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:03:33 compute-0 nova_compute[257700]: 2025-11-24 10:03:33.527 257704 DEBUG oslo_concurrency.lockutils [req-d83da3ab-a690-4d92-9e96-c1a157dd1367 req-007fcd21-dc6c-4b2b-9cbc-a8836d3e11e5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] Lock "374e7431-b73b-4a49-8aba-9ac699a35ebf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:03:33 compute-0 nova_compute[257700]: 2025-11-24 10:03:33.527 257704 DEBUG nova.compute.manager [req-d83da3ab-a690-4d92-9e96-c1a157dd1367 req-007fcd21-dc6c-4b2b-9cbc-a8836d3e11e5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] No waiting events found dispatching network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 24 10:03:33 compute-0 nova_compute[257700]: 2025-11-24 10:03:33.527 257704 WARNING nova.compute.manager [req-d83da3ab-a690-4d92-9e96-c1a157dd1367 req-007fcd21-dc6c-4b2b-9cbc-a8836d3e11e5 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received unexpected event network-vif-plugged-6f615f70-f3a3-45d6-8078-6f32abae3c0b for instance with vm_state deleted and task_state None.
Nov 24 10:03:34 compute-0 nova_compute[257700]: 2025-11-24 10:03:34.147 257704 DEBUG nova.compute.manager [req-8f77c2ea-4875-4786-89ad-d3c9a865a0ae req-a31407bf-3fab-465a-abfa-cb89c3b9bdf9 44249dd96a854e85bd606c53dd233c7e 3819d4ebd23b49ba8318637df78e23b6 - - default default] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Received event network-vif-deleted-6f615f70-f3a3-45d6-8078-6f32abae3c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 24 10:03:34 compute-0 ceph-mon[74331]: pgmap v1104: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 114 KiB/s wr, 53 op/s
Nov 24 10:03:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:34.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:03:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:03:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:03:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:03:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 18 KiB/s wr, 30 op/s
Nov 24 10:03:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:35.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:36 compute-0 nova_compute[257700]: 2025-11-24 10:03:36.232 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:36 compute-0 ceph-mon[74331]: pgmap v1105: 353 pgs: 353 active+clean; 121 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 18 KiB/s wr, 30 op/s
Nov 24 10:03:36 compute-0 nova_compute[257700]: 2025-11-24 10:03:36.400 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:03:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:36.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:03:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 19 KiB/s wr, 58 op/s
Nov 24 10:03:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:37.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:37.568Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:03:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:37.569Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:03:37 compute-0 nova_compute[257700]: 2025-11-24 10:03:37.914 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:38 compute-0 nova_compute[257700]: 2025-11-24 10:03:38.026 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:38 compute-0 ceph-mon[74331]: pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 19 KiB/s wr, 58 op/s
Nov 24 10:03:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:38.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:38.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:03:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Nov 24 10:03:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:39.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:03:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:03:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:03:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:03:40 compute-0 ceph-mon[74331]: pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Nov 24 10:03:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:40.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:03:40] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Nov 24 10:03:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:03:40] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Nov 24 10:03:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Nov 24 10:03:41 compute-0 nova_compute[257700]: 2025-11-24 10:03:41.234 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:41 compute-0 nova_compute[257700]: 2025-11-24 10:03:41.402 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:41.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:42 compute-0 ceph-mon[74331]: pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.400746) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622400786, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2137, "num_deletes": 251, "total_data_size": 4192770, "memory_usage": 4248528, "flush_reason": "Manual Compaction"}
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622431416, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4047955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29559, "largest_seqno": 31694, "table_properties": {"data_size": 4038373, "index_size": 6012, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20157, "raw_average_key_size": 20, "raw_value_size": 4019152, "raw_average_value_size": 4088, "num_data_blocks": 258, "num_entries": 983, "num_filter_entries": 983, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978416, "oldest_key_time": 1763978416, "file_creation_time": 1763978622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 30726 microseconds, and 7404 cpu microseconds.
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.431465) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4047955 bytes OK
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.431492) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.435821) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.435869) EVENT_LOG_v1 {"time_micros": 1763978622435859, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.435895) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4184076, prev total WAL file size 4184076, number of live WAL files 2.
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.437275) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3953KB)], [65(11MB)]
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622437358, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16211159, "oldest_snapshot_seqno": -1}
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6256 keys, 14109426 bytes, temperature: kUnknown
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622522256, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14109426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14068372, "index_size": 24304, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 160333, "raw_average_key_size": 25, "raw_value_size": 13956611, "raw_average_value_size": 2230, "num_data_blocks": 974, "num_entries": 6256, "num_filter_entries": 6256, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763978622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.522501) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14109426 bytes
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.528191) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.8 rd, 166.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 11.6 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 6776, records dropped: 520 output_compression: NoCompression
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.528207) EVENT_LOG_v1 {"time_micros": 1763978622528199, "job": 36, "event": "compaction_finished", "compaction_time_micros": 84955, "compaction_time_cpu_micros": 32985, "output_level": 6, "num_output_files": 1, "total_output_size": 14109426, "num_input_records": 6776, "num_output_records": 6256, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622529027, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978622531010, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.437195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.531156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.531163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.531165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.531167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:03:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:03:42.531168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:03:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:03:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:42.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:03:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Nov 24 10:03:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:43.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:44 compute-0 ceph-mon[74331]: pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.7 KiB/s wr, 57 op/s
Nov 24 10:03:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:44.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:03:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:03:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:03:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:03:45
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.nfs', '.mgr', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'default.rgw.meta']
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:03:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:45.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:03:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:03:46 compute-0 nova_compute[257700]: 2025-11-24 10:03:46.237 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:46 compute-0 nova_compute[257700]: 2025-11-24 10:03:46.368 257704 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763978611.3666196, 374e7431-b73b-4a49-8aba-9ac699a35ebf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 10:03:46 compute-0 nova_compute[257700]: 2025-11-24 10:03:46.368 257704 INFO nova.compute.manager [-] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] VM Stopped (Lifecycle Event)
Nov 24 10:03:46 compute-0 nova_compute[257700]: 2025-11-24 10:03:46.393 257704 DEBUG nova.compute.manager [None req-029fc424-48f6-4661-8eb6-26f77cb0bb23 - - - - - -] [instance: 374e7431-b73b-4a49-8aba-9ac699a35ebf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 10:03:46 compute-0 nova_compute[257700]: 2025-11-24 10:03:46.404 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:46 compute-0 ceph-mon[74331]: pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:03:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:03:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:46.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:46 compute-0 podman[280543]: 2025-11-24 10:03:46.806165926 +0000 UTC m=+0.072704355 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Nov 24 10:03:46 compute-0 podman[280544]: 2025-11-24 10:03:46.850458457 +0000 UTC m=+0.111987623 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 10:03:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:03:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:47.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:47.570Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:03:48 compute-0 ceph-mon[74331]: pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:03:48 compute-0 sudo[280591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:03:48 compute-0 sudo[280591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:03:48 compute-0 sudo[280591]: pam_unix(sudo:session): session closed for user root
Nov 24 10:03:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:48.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:48.921Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:03:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:48.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:03:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:03:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:49.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:03:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:03:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:03:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:03:50 compute-0 ceph-mon[74331]: pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:03:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:50.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:03:50] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:03:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:03:50] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:03:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:03:51 compute-0 nova_compute[257700]: 2025-11-24 10:03:51.240 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:51 compute-0 nova_compute[257700]: 2025-11-24 10:03:51.406 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:51.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:52 compute-0 ceph-mon[74331]: pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:03:52 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3936690340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:03:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:52.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:03:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:53.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:53 compute-0 ceph-mon[74331]: pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:03:53 compute-0 podman[280622]: 2025-11-24 10:03:53.82385032 +0000 UTC m=+0.089849477 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 24 10:03:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:54.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:03:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:03:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:03:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:03:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:03:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:55.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:56 compute-0 ceph-mon[74331]: pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:03:56 compute-0 nova_compute[257700]: 2025-11-24 10:03:56.241 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:56 compute-0 nova_compute[257700]: 2025-11-24 10:03:56.408 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:03:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:56.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:03:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:03:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:57.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:57.571Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:03:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:57.572Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:03:58 compute-0 ceph-mon[74331]: pgmap v1116: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:03:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:03:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:03:58.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:03:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:58.922Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:03:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:03:58.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:03:58 compute-0 sudo[280648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:03:58 compute-0 sudo[280648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:03:58 compute-0 sudo[280648]: pam_unix(sudo:session): session closed for user root
Nov 24 10:03:59 compute-0 sudo[280673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 10:03:59 compute-0 sudo[280673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:03:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:03:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:03:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:03:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:03:59.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:03:59 compute-0 podman[280772]: 2025-11-24 10:03:59.584176244 +0000 UTC m=+0.050161349 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:03:59 compute-0 podman[280772]: 2025-11-24 10:03:59.680413358 +0000 UTC m=+0.146398423 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:03:59 compute-0 sshd-session[280616]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:03:59 compute-0 sshd-session[280616]: banner exchange: Connection from 14.215.126.91 port 43106: Connection timed out
Nov 24 10:04:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:03:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:04:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:04:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:04:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:04:00 compute-0 ceph-mon[74331]: pgmap v1117: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:04:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3514165054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:04:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3484212954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 10:04:00 compute-0 podman[280911]: 2025-11-24 10:04:00.238157314 +0000 UTC m=+0.062525124 container exec c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:04:00 compute-0 podman[280911]: 2025-11-24 10:04:00.273609948 +0000 UTC m=+0.097977728 container exec_died c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:04:00 compute-0 sshd-session[280645]: Invalid user jason from 45.78.198.78 port 52920
Nov 24 10:04:00 compute-0 podman[280984]: 2025-11-24 10:04:00.536039651 +0000 UTC m=+0.059422377 container exec a8ff859c0ee484e58c6aaf58e6d722a3faffb91c2dea80441e79254f2043cb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 24 10:04:00 compute-0 podman[280984]: 2025-11-24 10:04:00.549342058 +0000 UTC m=+0.072724754 container exec_died a8ff859c0ee484e58c6aaf58e6d722a3faffb91c2dea80441e79254f2043cb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:04:00 compute-0 sshd-session[280645]: Received disconnect from 45.78.198.78 port 52920:11: Bye Bye [preauth]
Nov 24 10:04:00 compute-0 sshd-session[280645]: Disconnected from invalid user jason 45.78.198.78 port 52920 [preauth]
Nov 24 10:04:00 compute-0 podman[281050]: 2025-11-24 10:04:00.758887617 +0000 UTC m=+0.059335794 container exec 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 10:04:00 compute-0 podman[281050]: 2025-11-24 10:04:00.768382851 +0000 UTC m=+0.068831018 container exec_died 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 10:04:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:00.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:00 compute-0 podman[281117]: 2025-11-24 10:04:00.975960821 +0000 UTC m=+0.058715489 container exec da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., description=keepalived for Ceph, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.component=keepalived-container, distribution-scope=public, version=2.2.4, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2)
Nov 24 10:04:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:04:00] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Nov 24 10:04:00 compute-0 podman[281117]: 2025-11-24 10:04:00.990871789 +0000 UTC m=+0.073626387 container exec_died da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, com.redhat.component=keepalived-container, release=1793, vcs-type=git, io.buildah.version=1.28.2, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9)
Nov 24 10:04:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:04:00] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Nov 24 10:04:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:04:01 compute-0 podman[281184]: 2025-11-24 10:04:01.217522108 +0000 UTC m=+0.063166389 container exec 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:04:01 compute-0 nova_compute[257700]: 2025-11-24 10:04:01.244 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:04:01 compute-0 podman[281184]: 2025-11-24 10:04:01.269777338 +0000 UTC m=+0.115421589 container exec_died 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:04:01 compute-0 nova_compute[257700]: 2025-11-24 10:04:01.409 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:01.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:01 compute-0 sshd-session[280950]: Invalid user zookeeper from 36.255.3.203 port 60333
Nov 24 10:04:01 compute-0 podman[281256]: 2025-11-24 10:04:01.557199947 +0000 UTC m=+0.066410469 container exec 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 10:04:01 compute-0 sshd-session[280950]: Received disconnect from 36.255.3.203 port 60333:11: Bye Bye [preauth]
Nov 24 10:04:01 compute-0 sshd-session[280950]: Disconnected from invalid user zookeeper 36.255.3.203 port 60333 [preauth]
Nov 24 10:04:01 compute-0 podman[281256]: 2025-11-24 10:04:01.744135188 +0000 UTC m=+0.253345690 container exec_died 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 10:04:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:02 compute-0 podman[281369]: 2025-11-24 10:04:02.194139526 +0000 UTC m=+0.056171196 container exec 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:04:02 compute-0 podman[281369]: 2025-11-24 10:04:02.235515867 +0000 UTC m=+0.097547517 container exec_died 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:04:02 compute-0 ceph-mon[74331]: pgmap v1118: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:04:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/271928424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:04:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/271928424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:04:02 compute-0 sudo[280673]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:04:02 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:04:02 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:02 compute-0 sudo[281411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:04:02 compute-0 sudo[281411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:02 compute-0 sudo[281411]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:02 compute-0 sudo[281436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:04:02 compute-0 sudo[281436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:02.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:02 compute-0 sudo[281436]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:04:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:04:02 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:04:02 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:03 compute-0 sudo[281492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:04:03 compute-0 sudo[281492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:03 compute-0 sudo[281492]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:03 compute-0 sudo[281517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:04:03 compute-0 sudo[281517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:03 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:03 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:03 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:04:03 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:04:03 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:03 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:03 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:04:03 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:04:03 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:04:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:03.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:03 compute-0 podman[281582]: 2025-11-24 10:04:03.523848663 +0000 UTC m=+0.035802315 container create ad1b75768a367128fb227d2d97932f32261304322f8cc4f07ed65b763630091a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:04:03 compute-0 systemd[1]: Started libpod-conmon-ad1b75768a367128fb227d2d97932f32261304322f8cc4f07ed65b763630091a.scope.
Nov 24 10:04:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:04:03 compute-0 podman[281582]: 2025-11-24 10:04:03.579836334 +0000 UTC m=+0.091790006 container init ad1b75768a367128fb227d2d97932f32261304322f8cc4f07ed65b763630091a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nightingale, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:04:03 compute-0 podman[281582]: 2025-11-24 10:04:03.587034301 +0000 UTC m=+0.098987953 container start ad1b75768a367128fb227d2d97932f32261304322f8cc4f07ed65b763630091a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 24 10:04:03 compute-0 podman[281582]: 2025-11-24 10:04:03.590582068 +0000 UTC m=+0.102535740 container attach ad1b75768a367128fb227d2d97932f32261304322f8cc4f07ed65b763630091a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 10:04:03 compute-0 objective_nightingale[281599]: 167 167
Nov 24 10:04:03 compute-0 systemd[1]: libpod-ad1b75768a367128fb227d2d97932f32261304322f8cc4f07ed65b763630091a.scope: Deactivated successfully.
Nov 24 10:04:03 compute-0 podman[281582]: 2025-11-24 10:04:03.595829308 +0000 UTC m=+0.107782970 container died ad1b75768a367128fb227d2d97932f32261304322f8cc4f07ed65b763630091a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nightingale, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 24 10:04:03 compute-0 podman[281582]: 2025-11-24 10:04:03.509263353 +0000 UTC m=+0.021217025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:04:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb65f570ccaafc9e63a71fc48465a61a8a3d6675adf3b0a95defe96309d7b4a3-merged.mount: Deactivated successfully.
Nov 24 10:04:03 compute-0 podman[281582]: 2025-11-24 10:04:03.635828095 +0000 UTC m=+0.147781747 container remove ad1b75768a367128fb227d2d97932f32261304322f8cc4f07ed65b763630091a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_nightingale, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:04:03 compute-0 systemd[1]: libpod-conmon-ad1b75768a367128fb227d2d97932f32261304322f8cc4f07ed65b763630091a.scope: Deactivated successfully.
Nov 24 10:04:03 compute-0 podman[281622]: 2025-11-24 10:04:03.803719106 +0000 UTC m=+0.038898021 container create 535bef8d41ad4515d5417108f2074abcd2c9c52f90daa7181ace7238ec916127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:04:03 compute-0 systemd[1]: Started libpod-conmon-535bef8d41ad4515d5417108f2074abcd2c9c52f90daa7181ace7238ec916127.scope.
Nov 24 10:04:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da102034a5874f4c15a8252b9363ea2157e03bb4615be28ee4f1d3f72b447b93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da102034a5874f4c15a8252b9363ea2157e03bb4615be28ee4f1d3f72b447b93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da102034a5874f4c15a8252b9363ea2157e03bb4615be28ee4f1d3f72b447b93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da102034a5874f4c15a8252b9363ea2157e03bb4615be28ee4f1d3f72b447b93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da102034a5874f4c15a8252b9363ea2157e03bb4615be28ee4f1d3f72b447b93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:03 compute-0 podman[281622]: 2025-11-24 10:04:03.872881321 +0000 UTC m=+0.108060236 container init 535bef8d41ad4515d5417108f2074abcd2c9c52f90daa7181ace7238ec916127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 10:04:03 compute-0 podman[281622]: 2025-11-24 10:04:03.787941446 +0000 UTC m=+0.023120381 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:04:03 compute-0 podman[281622]: 2025-11-24 10:04:03.885123223 +0000 UTC m=+0.120302138 container start 535bef8d41ad4515d5417108f2074abcd2c9c52f90daa7181ace7238ec916127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:04:03 compute-0 podman[281622]: 2025-11-24 10:04:03.888364463 +0000 UTC m=+0.123543378 container attach 535bef8d41ad4515d5417108f2074abcd2c9c52f90daa7181ace7238ec916127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:04:04 compute-0 crazy_mestorf[281639]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:04:04 compute-0 crazy_mestorf[281639]: --> All data devices are unavailable
Nov 24 10:04:04 compute-0 systemd[1]: libpod-535bef8d41ad4515d5417108f2074abcd2c9c52f90daa7181ace7238ec916127.scope: Deactivated successfully.
Nov 24 10:04:04 compute-0 podman[281622]: 2025-11-24 10:04:04.237973796 +0000 UTC m=+0.473152711 container died 535bef8d41ad4515d5417108f2074abcd2c9c52f90daa7181ace7238ec916127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 24 10:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-da102034a5874f4c15a8252b9363ea2157e03bb4615be28ee4f1d3f72b447b93-merged.mount: Deactivated successfully.
Nov 24 10:04:04 compute-0 podman[281622]: 2025-11-24 10:04:04.283016647 +0000 UTC m=+0.518195572 container remove 535bef8d41ad4515d5417108f2074abcd2c9c52f90daa7181ace7238ec916127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Nov 24 10:04:04 compute-0 systemd[1]: libpod-conmon-535bef8d41ad4515d5417108f2074abcd2c9c52f90daa7181ace7238ec916127.scope: Deactivated successfully.
Nov 24 10:04:04 compute-0 sudo[281517]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:04 compute-0 ceph-mon[74331]: pgmap v1119: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:04:04 compute-0 sudo[281669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:04:04 compute-0 sudo[281669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:04 compute-0 sudo[281669]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:04 compute-0 sudo[281694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:04:04 compute-0 sudo[281694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:04.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:04 compute-0 podman[281762]: 2025-11-24 10:04:04.796974223 +0000 UTC m=+0.037520146 container create aaaba1eb15751afa7fee86aea798e196b4247361a887efb850bca87686d97812 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_villani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:04:04 compute-0 systemd[1]: Started libpod-conmon-aaaba1eb15751afa7fee86aea798e196b4247361a887efb850bca87686d97812.scope.
Nov 24 10:04:04 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:04:04 compute-0 podman[281762]: 2025-11-24 10:04:04.86170853 +0000 UTC m=+0.102254443 container init aaaba1eb15751afa7fee86aea798e196b4247361a887efb850bca87686d97812 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_villani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 10:04:04 compute-0 podman[281762]: 2025-11-24 10:04:04.869190655 +0000 UTC m=+0.109736568 container start aaaba1eb15751afa7fee86aea798e196b4247361a887efb850bca87686d97812 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_villani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:04:04 compute-0 podman[281762]: 2025-11-24 10:04:04.872574948 +0000 UTC m=+0.113121001 container attach aaaba1eb15751afa7fee86aea798e196b4247361a887efb850bca87686d97812 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_villani, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 10:04:04 compute-0 fervent_villani[281778]: 167 167
Nov 24 10:04:04 compute-0 systemd[1]: libpod-aaaba1eb15751afa7fee86aea798e196b4247361a887efb850bca87686d97812.scope: Deactivated successfully.
Nov 24 10:04:04 compute-0 podman[281762]: 2025-11-24 10:04:04.778372955 +0000 UTC m=+0.018918888 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:04:04 compute-0 podman[281762]: 2025-11-24 10:04:04.87509022 +0000 UTC m=+0.115636133 container died aaaba1eb15751afa7fee86aea798e196b4247361a887efb850bca87686d97812 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_villani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-faee26d4368e732506fced6709345a40cc480ca5234242fa7ff7a0589516e3d6-merged.mount: Deactivated successfully.
Nov 24 10:04:04 compute-0 podman[281762]: 2025-11-24 10:04:04.912760129 +0000 UTC m=+0.153306042 container remove aaaba1eb15751afa7fee86aea798e196b4247361a887efb850bca87686d97812 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 24 10:04:04 compute-0 systemd[1]: libpod-conmon-aaaba1eb15751afa7fee86aea798e196b4247361a887efb850bca87686d97812.scope: Deactivated successfully.
Nov 24 10:04:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:04:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:04:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:04:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:04:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:04:05 compute-0 podman[281800]: 2025-11-24 10:04:05.072323365 +0000 UTC m=+0.047524673 container create b9db333e10fc0886d112eae2126f940de88a5bc5631bad6118a5b0529f6df271 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chandrasekhar, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:04:05 compute-0 systemd[1]: Started libpod-conmon-b9db333e10fc0886d112eae2126f940de88a5bc5631bad6118a5b0529f6df271.scope.
Nov 24 10:04:05 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3ea1e54942416666ea8587d254a6f2d6e8737c44142e3a2a5f443971da5160/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3ea1e54942416666ea8587d254a6f2d6e8737c44142e3a2a5f443971da5160/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3ea1e54942416666ea8587d254a6f2d6e8737c44142e3a2a5f443971da5160/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3ea1e54942416666ea8587d254a6f2d6e8737c44142e3a2a5f443971da5160/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:05 compute-0 podman[281800]: 2025-11-24 10:04:05.05108747 +0000 UTC m=+0.026288788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:04:05 compute-0 podman[281800]: 2025-11-24 10:04:05.156327317 +0000 UTC m=+0.131528625 container init b9db333e10fc0886d112eae2126f940de88a5bc5631bad6118a5b0529f6df271 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 24 10:04:05 compute-0 podman[281800]: 2025-11-24 10:04:05.162912209 +0000 UTC m=+0.138113497 container start b9db333e10fc0886d112eae2126f940de88a5bc5631bad6118a5b0529f6df271 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:04:05 compute-0 podman[281800]: 2025-11-24 10:04:05.165672927 +0000 UTC m=+0.140874245 container attach b9db333e10fc0886d112eae2126f940de88a5bc5631bad6118a5b0529f6df271 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]: {
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:     "0": [
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:         {
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             "devices": [
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "/dev/loop3"
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             ],
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             "lv_name": "ceph_lv0",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             "lv_size": "21470642176",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             "name": "ceph_lv0",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             "tags": {
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.cluster_name": "ceph",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.crush_device_class": "",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.encrypted": "0",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.osd_id": "0",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.type": "block",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.vdo": "0",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:                 "ceph.with_tpm": "0"
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             },
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             "type": "block",
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:             "vg_name": "ceph_vg0"
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:         }
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]:     ]
Nov 24 10:04:05 compute-0 kind_chandrasekhar[281816]: }
Nov 24 10:04:05 compute-0 systemd[1]: libpod-b9db333e10fc0886d112eae2126f940de88a5bc5631bad6118a5b0529f6df271.scope: Deactivated successfully.
Nov 24 10:04:05 compute-0 podman[281800]: 2025-11-24 10:04:05.423237139 +0000 UTC m=+0.398438457 container died b9db333e10fc0886d112eae2126f940de88a5bc5631bad6118a5b0529f6df271 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 24 10:04:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:05.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc3ea1e54942416666ea8587d254a6f2d6e8737c44142e3a2a5f443971da5160-merged.mount: Deactivated successfully.
Nov 24 10:04:05 compute-0 podman[281800]: 2025-11-24 10:04:05.466030395 +0000 UTC m=+0.441231683 container remove b9db333e10fc0886d112eae2126f940de88a5bc5631bad6118a5b0529f6df271 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_chandrasekhar, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:04:05 compute-0 systemd[1]: libpod-conmon-b9db333e10fc0886d112eae2126f940de88a5bc5631bad6118a5b0529f6df271.scope: Deactivated successfully.
Nov 24 10:04:05 compute-0 sudo[281694]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:05 compute-0 sudo[281837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:04:05 compute-0 sudo[281837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:05 compute-0 sudo[281837]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:05 compute-0 sudo[281863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:04:05 compute-0 sudo[281863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:06 compute-0 podman[281929]: 2025-11-24 10:04:06.074583394 +0000 UTC m=+0.053092700 container create ee8177dd8198b0fae5df44feb88ea27ad8fca9ae73ca4429a38d975562f87a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 10:04:06 compute-0 systemd[1]: Started libpod-conmon-ee8177dd8198b0fae5df44feb88ea27ad8fca9ae73ca4429a38d975562f87a9d.scope.
Nov 24 10:04:06 compute-0 podman[281929]: 2025-11-24 10:04:06.052755746 +0000 UTC m=+0.031265062 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:04:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:04:06 compute-0 podman[281929]: 2025-11-24 10:04:06.188640868 +0000 UTC m=+0.167150194 container init ee8177dd8198b0fae5df44feb88ea27ad8fca9ae73ca4429a38d975562f87a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cartwright, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:04:06 compute-0 podman[281929]: 2025-11-24 10:04:06.203123305 +0000 UTC m=+0.181632601 container start ee8177dd8198b0fae5df44feb88ea27ad8fca9ae73ca4429a38d975562f87a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cartwright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:04:06 compute-0 podman[281929]: 2025-11-24 10:04:06.207190895 +0000 UTC m=+0.185700191 container attach ee8177dd8198b0fae5df44feb88ea27ad8fca9ae73ca4429a38d975562f87a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 10:04:06 compute-0 dazzling_cartwright[281945]: 167 167
Nov 24 10:04:06 compute-0 podman[281929]: 2025-11-24 10:04:06.214348061 +0000 UTC m=+0.192857377 container died ee8177dd8198b0fae5df44feb88ea27ad8fca9ae73ca4429a38d975562f87a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cartwright, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 10:04:06 compute-0 systemd[1]: libpod-ee8177dd8198b0fae5df44feb88ea27ad8fca9ae73ca4429a38d975562f87a9d.scope: Deactivated successfully.
Nov 24 10:04:06 compute-0 nova_compute[257700]: 2025-11-24 10:04:06.246 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-24653eafe646f68809ddd4f21c3394544358fe1be645a3e9b20823034cb7d33b-merged.mount: Deactivated successfully.
Nov 24 10:04:06 compute-0 podman[281929]: 2025-11-24 10:04:06.260772446 +0000 UTC m=+0.239281742 container remove ee8177dd8198b0fae5df44feb88ea27ad8fca9ae73ca4429a38d975562f87a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:04:06 compute-0 systemd[1]: libpod-conmon-ee8177dd8198b0fae5df44feb88ea27ad8fca9ae73ca4429a38d975562f87a9d.scope: Deactivated successfully.
Nov 24 10:04:06 compute-0 ceph-mon[74331]: pgmap v1120: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 24 10:04:06 compute-0 nova_compute[257700]: 2025-11-24 10:04:06.411 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:06 compute-0 podman[281969]: 2025-11-24 10:04:06.513222433 +0000 UTC m=+0.060318608 container create 38c5ab55050ddc99ccd1c57b4c716068bfe1468fa33b82cb5824aa78791c5faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 24 10:04:06 compute-0 systemd[1]: Started libpod-conmon-38c5ab55050ddc99ccd1c57b4c716068bfe1468fa33b82cb5824aa78791c5faf.scope.
Nov 24 10:04:06 compute-0 podman[281969]: 2025-11-24 10:04:06.488533034 +0000 UTC m=+0.035629289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:04:06 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dadfb5ee1167b1a3a0b61defa565946d030e905b9332b9f17754b0f3727a65c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dadfb5ee1167b1a3a0b61defa565946d030e905b9332b9f17754b0f3727a65c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dadfb5ee1167b1a3a0b61defa565946d030e905b9332b9f17754b0f3727a65c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dadfb5ee1167b1a3a0b61defa565946d030e905b9332b9f17754b0f3727a65c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:04:06 compute-0 podman[281969]: 2025-11-24 10:04:06.629835219 +0000 UTC m=+0.176931474 container init 38c5ab55050ddc99ccd1c57b4c716068bfe1468fa33b82cb5824aa78791c5faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gauss, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 24 10:04:06 compute-0 podman[281969]: 2025-11-24 10:04:06.636888453 +0000 UTC m=+0.183984618 container start 38c5ab55050ddc99ccd1c57b4c716068bfe1468fa33b82cb5824aa78791c5faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 10:04:06 compute-0 podman[281969]: 2025-11-24 10:04:06.640996055 +0000 UTC m=+0.188092300 container attach 38c5ab55050ddc99ccd1c57b4c716068bfe1468fa33b82cb5824aa78791c5faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Nov 24 10:04:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 10:04:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:07 compute-0 lvm[282060]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:04:07 compute-0 lvm[282060]: VG ceph_vg0 finished
Nov 24 10:04:07 compute-0 blissful_gauss[281986]: {}
Nov 24 10:04:07 compute-0 systemd[1]: libpod-38c5ab55050ddc99ccd1c57b4c716068bfe1468fa33b82cb5824aa78791c5faf.scope: Deactivated successfully.
Nov 24 10:04:07 compute-0 systemd[1]: libpod-38c5ab55050ddc99ccd1c57b4c716068bfe1468fa33b82cb5824aa78791c5faf.scope: Consumed 1.370s CPU time.
Nov 24 10:04:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:07.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:07 compute-0 podman[282064]: 2025-11-24 10:04:07.474934343 +0000 UTC m=+0.028979786 container died 38c5ab55050ddc99ccd1c57b4c716068bfe1468fa33b82cb5824aa78791c5faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 10:04:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dadfb5ee1167b1a3a0b61defa565946d030e905b9332b9f17754b0f3727a65c-merged.mount: Deactivated successfully.
Nov 24 10:04:07 compute-0 podman[282064]: 2025-11-24 10:04:07.508183823 +0000 UTC m=+0.062229236 container remove 38c5ab55050ddc99ccd1c57b4c716068bfe1468fa33b82cb5824aa78791c5faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_gauss, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 10:04:07 compute-0 systemd[1]: libpod-conmon-38c5ab55050ddc99ccd1c57b4c716068bfe1468fa33b82cb5824aa78791c5faf.scope: Deactivated successfully.
Nov 24 10:04:07 compute-0 sudo[281863]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:04:07 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:07.572Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:04:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:04:07 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:07 compute-0 sudo[282080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:04:07 compute-0 sudo[282080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:07 compute-0 sudo[282080]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:08 compute-0 ceph-mon[74331]: pgmap v1121: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 24 10:04:08 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:08 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:04:08 compute-0 sudo[282106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:04:08 compute-0 sudo[282106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:08 compute-0 sudo[282106]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:08.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:04:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 24 10:04:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:09.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:04:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:04:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:04:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:04:10 compute-0 ceph-mon[74331]: pgmap v1122: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 24 10:04:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:10.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 24 10:04:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:04:10] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Nov 24 10:04:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:04:10] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Nov 24 10:04:11 compute-0 nova_compute[257700]: 2025-11-24 10:04:11.249 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:11 compute-0 nova_compute[257700]: 2025-11-24 10:04:11.413 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:11.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:11 compute-0 ceph-mon[74331]: pgmap v1123: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 24 10:04:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:12.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 24 10:04:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:13.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:13 compute-0 ovn_controller[155123]: 2025-11-24T10:04:13Z|00097|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Nov 24 10:04:13 compute-0 sshd-session[282135]: Received disconnect from 83.229.122.23 port 46168:11: Bye Bye [preauth]
Nov 24 10:04:13 compute-0 sshd-session[282135]: Disconnected from authenticating user root 83.229.122.23 port 46168 [preauth]
Nov 24 10:04:14 compute-0 ceph-mon[74331]: pgmap v1124: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 24 10:04:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:14.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 10:04:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:04:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:04:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:04:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:04:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:04:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:04:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:15.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:04:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:04:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:04:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:04:16 compute-0 ceph-mon[74331]: pgmap v1125: 353 pgs: 353 active+clean; 88 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 24 10:04:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:04:16 compute-0 nova_compute[257700]: 2025-11-24 10:04:16.250 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:16 compute-0 nova_compute[257700]: 2025-11-24 10:04:16.415 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:16.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 24 10:04:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:17.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:17.575Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:04:17 compute-0 podman[282142]: 2025-11-24 10:04:17.791042731 +0000 UTC m=+0.061164819 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 10:04:17 compute-0 podman[282143]: 2025-11-24 10:04:17.817803022 +0000 UTC m=+0.085917331 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 24 10:04:18 compute-0 ceph-mon[74331]: pgmap v1126: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 24 10:04:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:18.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:18.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:04:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 24 10:04:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:19.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:04:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:04:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:04:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:04:20 compute-0 ceph-mon[74331]: pgmap v1127: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 24 10:04:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:04:20.574 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:04:20.575 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:04:20.575 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:20.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 24 10:04:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:04:20] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 10:04:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:04:20] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 10:04:21 compute-0 nova_compute[257700]: 2025-11-24 10:04:21.250 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:21 compute-0 nova_compute[257700]: 2025-11-24 10:04:21.415 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:21.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:21 compute-0 nova_compute[257700]: 2025-11-24 10:04:21.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:21 compute-0 nova_compute[257700]: 2025-11-24 10:04:21.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:22 compute-0 ceph-mon[74331]: pgmap v1128: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 24 10:04:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:22.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:22 compute-0 nova_compute[257700]: 2025-11-24 10:04:22.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 10:04:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:23.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:23 compute-0 nova_compute[257700]: 2025-11-24 10:04:23.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:23 compute-0 nova_compute[257700]: 2025-11-24 10:04:23.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:04:23 compute-0 nova_compute[257700]: 2025-11-24 10:04:23.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:04:23 compute-0 nova_compute[257700]: 2025-11-24 10:04:23.942 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:04:23 compute-0 nova_compute[257700]: 2025-11-24 10:04:23.942 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:23 compute-0 nova_compute[257700]: 2025-11-24 10:04:23.959 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:23 compute-0 nova_compute[257700]: 2025-11-24 10:04:23.960 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:23 compute-0 nova_compute[257700]: 2025-11-24 10:04:23.960 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:23 compute-0 nova_compute[257700]: 2025-11-24 10:04:23.960 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:04:23 compute-0 nova_compute[257700]: 2025-11-24 10:04:23.960 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:04:24 compute-0 ceph-mon[74331]: pgmap v1129: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 10:04:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:04:24 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512625883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.400 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:04:24 compute-0 podman[282216]: 2025-11-24 10:04:24.489140446 +0000 UTC m=+0.049391699 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.548 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.549 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4562MB free_disk=59.942752838134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.549 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.550 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.601 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.602 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.616 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing inventories for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.631 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating ProviderTree inventory for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.631 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating inventory in ProviderTree for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.646 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing aggregate associations for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.670 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing trait associations for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,HW_CPU_X86_BMI,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 10:04:24 compute-0 nova_compute[257700]: 2025-11-24 10:04:24.684 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:04:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:24.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 10:04:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:04:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:04:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:04:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:04:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:04:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4214854330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:25 compute-0 nova_compute[257700]: 2025-11-24 10:04:25.112 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:04:25 compute-0 nova_compute[257700]: 2025-11-24 10:04:25.118 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:04:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3512625883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4214854330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:25 compute-0 nova_compute[257700]: 2025-11-24 10:04:25.133 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:04:25 compute-0 nova_compute[257700]: 2025-11-24 10:04:25.274 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:04:25 compute-0 nova_compute[257700]: 2025-11-24 10:04:25.275 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:04:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:25.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:26 compute-0 ceph-mon[74331]: pgmap v1130: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 10:04:26 compute-0 nova_compute[257700]: 2025-11-24 10:04:26.252 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:26 compute-0 nova_compute[257700]: 2025-11-24 10:04:26.417 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:26.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 10:04:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:27 compute-0 nova_compute[257700]: 2025-11-24 10:04:27.254 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:27 compute-0 nova_compute[257700]: 2025-11-24 10:04:27.254 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:27 compute-0 nova_compute[257700]: 2025-11-24 10:04:27.254 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:04:27 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3436723982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:27 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3125762373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:27 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/112966388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:27.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:27.576Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:04:28 compute-0 ceph-mon[74331]: pgmap v1131: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 24 10:04:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3276436408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:28.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:28 compute-0 sudo[282262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:04:28 compute-0 sudo[282262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:28 compute-0 sudo[282262]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:28 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:04:28.898 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:04:28 compute-0 nova_compute[257700]: 2025-11-24 10:04:28.899 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:28 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:04:28.900 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 10:04:28 compute-0 nova_compute[257700]: 2025-11-24 10:04:28.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:28 compute-0 nova_compute[257700]: 2025-11-24 10:04:28.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:04:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:28.927Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:04:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Nov 24 10:04:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:29.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:04:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:04:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:04:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:04:30 compute-0 ceph-mon[74331]: pgmap v1132: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Nov 24 10:04:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:30.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Nov 24 10:04:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:04:30] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 10:04:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:04:30] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 10:04:31 compute-0 nova_compute[257700]: 2025-11-24 10:04:31.252 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:04:31 compute-0 nova_compute[257700]: 2025-11-24 10:04:31.418 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:31.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:31 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:04:31.902 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:04:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:32 compute-0 ceph-mon[74331]: pgmap v1133: 353 pgs: 353 active+clean; 121 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Nov 24 10:04:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:32.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 43 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 15 KiB/s wr, 15 op/s
Nov 24 10:04:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1074205317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:04:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:33.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:34 compute-0 ceph-mon[74331]: pgmap v1134: 353 pgs: 353 active+clean; 43 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 15 KiB/s wr, 15 op/s
Nov 24 10:04:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:34.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 43 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 2.7 KiB/s wr, 14 op/s
Nov 24 10:04:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:04:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:04:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:04:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:04:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:35.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:36 compute-0 nova_compute[257700]: 2025-11-24 10:04:36.254 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:36 compute-0 ceph-mon[74331]: pgmap v1135: 353 pgs: 353 active+clean; 43 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 2.7 KiB/s wr, 14 op/s
Nov 24 10:04:36 compute-0 nova_compute[257700]: 2025-11-24 10:04:36.418 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:36.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Nov 24 10:04:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:37.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:37.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:04:38 compute-0 ceph-mon[74331]: pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Nov 24 10:04:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:38.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:38.927Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:04:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:04:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:39.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:04:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:04:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:04:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:04:40 compute-0 ceph-mon[74331]: pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:04:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:40.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:04:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:04:40] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 10:04:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:04:40] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Nov 24 10:04:41 compute-0 nova_compute[257700]: 2025-11-24 10:04:41.267 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:41 compute-0 nova_compute[257700]: 2025-11-24 10:04:41.421 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:41.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:42 compute-0 ceph-mon[74331]: pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 24 10:04:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:42.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 24 10:04:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:43.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:44 compute-0 ceph-mon[74331]: pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 24 10:04:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:44.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 14 op/s
Nov 24 10:04:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:04:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:04:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:04:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:04:45 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:04:45
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'vms', 'images', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', '.nfs']
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:04:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:04:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:45.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:04:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:04:46 compute-0 nova_compute[257700]: 2025-11-24 10:04:46.268 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:46 compute-0 nova_compute[257700]: 2025-11-24 10:04:46.422 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:46 compute-0 ceph-mon[74331]: pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 14 op/s
Nov 24 10:04:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:46.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 15 op/s
Nov 24 10:04:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:47.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:47.578Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:04:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:47.578Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:04:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:47.579Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:04:48 compute-0 ceph-mon[74331]: pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 15 op/s
Nov 24 10:04:48 compute-0 podman[282309]: 2025-11-24 10:04:48.799163912 +0000 UTC m=+0.078006275 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 24 10:04:48 compute-0 podman[282308]: 2025-11-24 10:04:48.800660299 +0000 UTC m=+0.081229095 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 10:04:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:48.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:48.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:04:48 compute-0 sudo[282355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:04:48 compute-0 sudo[282355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:04:48 compute-0 sudo[282355]: pam_unix(sudo:session): session closed for user root
Nov 24 10:04:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:04:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:49.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:04:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:04:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:04:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:04:50 compute-0 ceph-mon[74331]: pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:04:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:50.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:04:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:04:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:04:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:04:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:04:51 compute-0 nova_compute[257700]: 2025-11-24 10:04:51.270 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:51 compute-0 nova_compute[257700]: 2025-11-24 10:04:51.423 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:51.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:52 compute-0 ceph-mon[74331]: pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:04:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:52.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:04:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:53.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:53 compute-0 ceph-mon[74331]: pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:04:54 compute-0 podman[282386]: 2025-11-24 10:04:54.797421884 +0000 UTC m=+0.073909844 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 24 10:04:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:54.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:04:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:04:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:04:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:04:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:04:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:55.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:56 compute-0 ceph-mon[74331]: pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:04:56 compute-0 nova_compute[257700]: 2025-11-24 10:04:56.272 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:56 compute-0 nova_compute[257700]: 2025-11-24 10:04:56.424 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:04:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:04:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:56.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:04:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:04:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:04:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:57.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:57.580Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:04:58 compute-0 ceph-mon[74331]: pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:04:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:04:58.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:04:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:04:58.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:04:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:04:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:04:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:04:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:04:59.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:05:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:05:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:04:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:05:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:05:00 compute-0 ceph-mon[74331]: pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:00.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:05:00] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Nov 24 10:05:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:05:00] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Nov 24 10:05:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:05:01 compute-0 nova_compute[257700]: 2025-11-24 10:05:01.273 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:01 compute-0 nova_compute[257700]: 2025-11-24 10:05:01.426 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:01.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:02 compute-0 ceph-mon[74331]: pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2840377285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:05:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2840377285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:05:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:02.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:03.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:04 compute-0 ceph-mon[74331]: pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:04.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:05:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:05:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:05:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:05:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:05.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:05 compute-0 sshd-session[282414]: Received disconnect from 36.255.3.203 port 44255:11: Bye Bye [preauth]
Nov 24 10:05:05 compute-0 sshd-session[282414]: Disconnected from authenticating user root 36.255.3.203 port 44255 [preauth]
Nov 24 10:05:06 compute-0 ceph-mon[74331]: pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:06 compute-0 nova_compute[257700]: 2025-11-24 10:05:06.275 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:06 compute-0 nova_compute[257700]: 2025-11-24 10:05:06.428 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:06.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:07.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:07.581Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:05:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:05:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7334 writes, 32K keys, 7334 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 7334 writes, 7334 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1571 writes, 6677 keys, 1571 commit groups, 1.0 writes per commit group, ingest: 11.56 MB, 0.02 MB/s
                                           Interval WAL: 1571 writes, 1571 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    106.9      0.48              0.14        18    0.027       0      0       0.0       0.0
                                             L6      1/0   13.46 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3    169.7    144.8      1.52              0.52        17    0.089     94K   9453       0.0       0.0
                                            Sum      1/0   13.46 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3    129.0    135.8      2.00              0.66        35    0.057     94K   9453       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.9    160.5    164.2      0.40              0.15         8    0.051     26K   2568       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    169.7    144.8      1.52              0.52        17    0.089     94K   9453       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    107.7      0.48              0.14        17    0.028       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.27 GB write, 0.11 MB/s write, 0.25 GB read, 0.11 MB/s read, 2.0 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b87797d350#2 capacity: 304.00 MB usage: 22.81 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000957 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1230,22.07 MB,7.26057%) FilterBlock(36,275.55 KB,0.088516%) IndexBlock(36,482.30 KB,0.154932%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 24 10:05:07 compute-0 sudo[282420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:05:07 compute-0 sudo[282420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:07 compute-0 sudo[282420]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:07 compute-0 sudo[282445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:05:07 compute-0 sudo[282445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:08 compute-0 ceph-mon[74331]: pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:08 compute-0 sudo[282445]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:05:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:05:08 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:05:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:05:08 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:05:08 compute-0 sudo[282503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:05:08 compute-0 sudo[282503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:08 compute-0 sudo[282503]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:08 compute-0 sudo[282528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:05:08 compute-0 sudo[282528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:08.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:08.929Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:05:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:08.930Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:05:09 compute-0 sudo[282581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:05:09 compute-0 sudo[282581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:09 compute-0 sudo[282581]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:09 compute-0 podman[282620]: 2025-11-24 10:05:09.119504799 +0000 UTC m=+0.054200689 container create a2afca20d14f5dd1da7b73057a4e609e714724401543606d452d4be2e705c29c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 10:05:09 compute-0 systemd[1]: Started libpod-conmon-a2afca20d14f5dd1da7b73057a4e609e714724401543606d452d4be2e705c29c.scope.
Nov 24 10:05:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:05:09 compute-0 podman[282620]: 2025-11-24 10:05:09.092889352 +0000 UTC m=+0.027585322 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:05:09 compute-0 podman[282620]: 2025-11-24 10:05:09.193461233 +0000 UTC m=+0.128157123 container init a2afca20d14f5dd1da7b73057a4e609e714724401543606d452d4be2e705c29c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_moore, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 10:05:09 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:09 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:05:09 compute-0 ceph-mon[74331]: pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:05:09 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:05:09 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:05:09 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:05:09 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:05:09 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:09 compute-0 podman[282620]: 2025-11-24 10:05:09.199887091 +0000 UTC m=+0.134582961 container start a2afca20d14f5dd1da7b73057a4e609e714724401543606d452d4be2e705c29c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_moore, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 24 10:05:09 compute-0 elated_moore[282636]: 167 167
Nov 24 10:05:09 compute-0 systemd[1]: libpod-a2afca20d14f5dd1da7b73057a4e609e714724401543606d452d4be2e705c29c.scope: Deactivated successfully.
Nov 24 10:05:09 compute-0 podman[282620]: 2025-11-24 10:05:09.206040033 +0000 UTC m=+0.140735923 container attach a2afca20d14f5dd1da7b73057a4e609e714724401543606d452d4be2e705c29c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 10:05:09 compute-0 podman[282620]: 2025-11-24 10:05:09.206809872 +0000 UTC m=+0.141505752 container died a2afca20d14f5dd1da7b73057a4e609e714724401543606d452d4be2e705c29c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:05:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f152cb4c583c81bf9f8f3131c5397ed6c135fb7df77c738c1f722492a415401-merged.mount: Deactivated successfully.
Nov 24 10:05:09 compute-0 podman[282620]: 2025-11-24 10:05:09.243954938 +0000 UTC m=+0.178650828 container remove a2afca20d14f5dd1da7b73057a4e609e714724401543606d452d4be2e705c29c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_moore, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:05:09 compute-0 systemd[1]: libpod-conmon-a2afca20d14f5dd1da7b73057a4e609e714724401543606d452d4be2e705c29c.scope: Deactivated successfully.
Nov 24 10:05:09 compute-0 podman[282661]: 2025-11-24 10:05:09.396282125 +0000 UTC m=+0.048712402 container create 071469ae09b7eb50554b677c9b68971e8c31d20f7bafa3e8ae3a62fc243ac0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_villani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:05:09 compute-0 systemd[1]: Started libpod-conmon-071469ae09b7eb50554b677c9b68971e8c31d20f7bafa3e8ae3a62fc243ac0de.scope.
Nov 24 10:05:09 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5160be351649e780526ed62b1e68354eb13b02728d80d6bb97bad413d094e4e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5160be351649e780526ed62b1e68354eb13b02728d80d6bb97bad413d094e4e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5160be351649e780526ed62b1e68354eb13b02728d80d6bb97bad413d094e4e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5160be351649e780526ed62b1e68354eb13b02728d80d6bb97bad413d094e4e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5160be351649e780526ed62b1e68354eb13b02728d80d6bb97bad413d094e4e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:09 compute-0 podman[282661]: 2025-11-24 10:05:09.37254104 +0000 UTC m=+0.024971407 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:05:09 compute-0 podman[282661]: 2025-11-24 10:05:09.475733504 +0000 UTC m=+0.128163801 container init 071469ae09b7eb50554b677c9b68971e8c31d20f7bafa3e8ae3a62fc243ac0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:05:09 compute-0 podman[282661]: 2025-11-24 10:05:09.485464464 +0000 UTC m=+0.137894741 container start 071469ae09b7eb50554b677c9b68971e8c31d20f7bafa3e8ae3a62fc243ac0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:05:09 compute-0 podman[282661]: 2025-11-24 10:05:09.488427287 +0000 UTC m=+0.140857594 container attach 071469ae09b7eb50554b677c9b68971e8c31d20f7bafa3e8ae3a62fc243ac0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_villani, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 10:05:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:09.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:09 compute-0 vigilant_villani[282677]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:05:09 compute-0 vigilant_villani[282677]: --> All data devices are unavailable
Nov 24 10:05:09 compute-0 systemd[1]: libpod-071469ae09b7eb50554b677c9b68971e8c31d20f7bafa3e8ae3a62fc243ac0de.scope: Deactivated successfully.
Nov 24 10:05:09 compute-0 podman[282661]: 2025-11-24 10:05:09.841695921 +0000 UTC m=+0.494126228 container died 071469ae09b7eb50554b677c9b68971e8c31d20f7bafa3e8ae3a62fc243ac0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_villani, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:05:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5160be351649e780526ed62b1e68354eb13b02728d80d6bb97bad413d094e4e0-merged.mount: Deactivated successfully.
Nov 24 10:05:09 compute-0 podman[282661]: 2025-11-24 10:05:09.881479132 +0000 UTC m=+0.533909409 container remove 071469ae09b7eb50554b677c9b68971e8c31d20f7bafa3e8ae3a62fc243ac0de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 10:05:09 compute-0 systemd[1]: libpod-conmon-071469ae09b7eb50554b677c9b68971e8c31d20f7bafa3e8ae3a62fc243ac0de.scope: Deactivated successfully.
Nov 24 10:05:09 compute-0 sudo[282528]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:09 compute-0 sudo[282705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:05:09 compute-0 sudo[282705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:09 compute-0 sudo[282705]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:05:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:05:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:05:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:05:10 compute-0 sudo[282730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:05:10 compute-0 sudo[282730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:10 compute-0 podman[282797]: 2025-11-24 10:05:10.479409949 +0000 UTC m=+0.048533587 container create 2a0cf4bcfeb96107519bf79c33a4bce998a492f5f8cf36343d5e8f004b56b362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:05:10 compute-0 systemd[1]: Started libpod-conmon-2a0cf4bcfeb96107519bf79c33a4bce998a492f5f8cf36343d5e8f004b56b362.scope.
Nov 24 10:05:10 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:05:10 compute-0 podman[282797]: 2025-11-24 10:05:10.458465612 +0000 UTC m=+0.027589350 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:05:10 compute-0 podman[282797]: 2025-11-24 10:05:10.554827979 +0000 UTC m=+0.123951667 container init 2a0cf4bcfeb96107519bf79c33a4bce998a492f5f8cf36343d5e8f004b56b362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ellis, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 24 10:05:10 compute-0 podman[282797]: 2025-11-24 10:05:10.567299617 +0000 UTC m=+0.136423285 container start 2a0cf4bcfeb96107519bf79c33a4bce998a492f5f8cf36343d5e8f004b56b362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ellis, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:05:10 compute-0 podman[282797]: 2025-11-24 10:05:10.571355147 +0000 UTC m=+0.140478795 container attach 2a0cf4bcfeb96107519bf79c33a4bce998a492f5f8cf36343d5e8f004b56b362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ellis, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:05:10 compute-0 interesting_ellis[282814]: 167 167
Nov 24 10:05:10 compute-0 systemd[1]: libpod-2a0cf4bcfeb96107519bf79c33a4bce998a492f5f8cf36343d5e8f004b56b362.scope: Deactivated successfully.
Nov 24 10:05:10 compute-0 conmon[282814]: conmon 2a0cf4bcfeb96107519b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a0cf4bcfeb96107519bf79c33a4bce998a492f5f8cf36343d5e8f004b56b362.scope/container/memory.events
Nov 24 10:05:10 compute-0 podman[282797]: 2025-11-24 10:05:10.575146611 +0000 UTC m=+0.144270269 container died 2a0cf4bcfeb96107519bf79c33a4bce998a492f5f8cf36343d5e8f004b56b362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 10:05:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:05:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-45744b9fe8f4bd621a1dfd61363afc654f46c3a37caf78963fad91b49f52e684-merged.mount: Deactivated successfully.
Nov 24 10:05:10 compute-0 podman[282797]: 2025-11-24 10:05:10.64121315 +0000 UTC m=+0.210336818 container remove 2a0cf4bcfeb96107519bf79c33a4bce998a492f5f8cf36343d5e8f004b56b362 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 24 10:05:10 compute-0 systemd[1]: libpod-conmon-2a0cf4bcfeb96107519bf79c33a4bce998a492f5f8cf36343d5e8f004b56b362.scope: Deactivated successfully.
Nov 24 10:05:10 compute-0 ceph-mon[74331]: pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:05:10 compute-0 podman[282838]: 2025-11-24 10:05:10.824184372 +0000 UTC m=+0.059371625 container create 4bfa738dba4525215a83929fc23586ef70e84ba34b0ac2117f4baa9462a6206b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_buck, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 10:05:10 compute-0 systemd[1]: Started libpod-conmon-4bfa738dba4525215a83929fc23586ef70e84ba34b0ac2117f4baa9462a6206b.scope.
Nov 24 10:05:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:10 compute-0 podman[282838]: 2025-11-24 10:05:10.795092316 +0000 UTC m=+0.030307589 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:05:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:10.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:10 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7838f98098be530248d3a8d12a8947c77716d5756c003c9af98b5e42612893e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7838f98098be530248d3a8d12a8947c77716d5756c003c9af98b5e42612893e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7838f98098be530248d3a8d12a8947c77716d5756c003c9af98b5e42612893e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7838f98098be530248d3a8d12a8947c77716d5756c003c9af98b5e42612893e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:10 compute-0 podman[282838]: 2025-11-24 10:05:10.921330039 +0000 UTC m=+0.156517272 container init 4bfa738dba4525215a83929fc23586ef70e84ba34b0ac2117f4baa9462a6206b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 10:05:10 compute-0 podman[282838]: 2025-11-24 10:05:10.92665217 +0000 UTC m=+0.161839393 container start 4bfa738dba4525215a83929fc23586ef70e84ba34b0ac2117f4baa9462a6206b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 24 10:05:10 compute-0 podman[282838]: 2025-11-24 10:05:10.929575752 +0000 UTC m=+0.164762965 container attach 4bfa738dba4525215a83929fc23586ef70e84ba34b0ac2117f4baa9462a6206b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Nov 24 10:05:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:05:10] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Nov 24 10:05:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:05:10] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Nov 24 10:05:11 compute-0 charming_buck[282855]: {
Nov 24 10:05:11 compute-0 charming_buck[282855]:     "0": [
Nov 24 10:05:11 compute-0 charming_buck[282855]:         {
Nov 24 10:05:11 compute-0 charming_buck[282855]:             "devices": [
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "/dev/loop3"
Nov 24 10:05:11 compute-0 charming_buck[282855]:             ],
Nov 24 10:05:11 compute-0 charming_buck[282855]:             "lv_name": "ceph_lv0",
Nov 24 10:05:11 compute-0 charming_buck[282855]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:05:11 compute-0 charming_buck[282855]:             "lv_size": "21470642176",
Nov 24 10:05:11 compute-0 charming_buck[282855]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:05:11 compute-0 charming_buck[282855]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:05:11 compute-0 charming_buck[282855]:             "name": "ceph_lv0",
Nov 24 10:05:11 compute-0 charming_buck[282855]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:05:11 compute-0 charming_buck[282855]:             "tags": {
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.cluster_name": "ceph",
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.crush_device_class": "",
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.encrypted": "0",
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.osd_id": "0",
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.type": "block",
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.vdo": "0",
Nov 24 10:05:11 compute-0 charming_buck[282855]:                 "ceph.with_tpm": "0"
Nov 24 10:05:11 compute-0 charming_buck[282855]:             },
Nov 24 10:05:11 compute-0 charming_buck[282855]:             "type": "block",
Nov 24 10:05:11 compute-0 charming_buck[282855]:             "vg_name": "ceph_vg0"
Nov 24 10:05:11 compute-0 charming_buck[282855]:         }
Nov 24 10:05:11 compute-0 charming_buck[282855]:     ]
Nov 24 10:05:11 compute-0 charming_buck[282855]: }
Nov 24 10:05:11 compute-0 nova_compute[257700]: 2025-11-24 10:05:11.279 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:11 compute-0 systemd[1]: libpod-4bfa738dba4525215a83929fc23586ef70e84ba34b0ac2117f4baa9462a6206b.scope: Deactivated successfully.
Nov 24 10:05:11 compute-0 podman[282838]: 2025-11-24 10:05:11.307275098 +0000 UTC m=+0.542462371 container died 4bfa738dba4525215a83929fc23586ef70e84ba34b0ac2117f4baa9462a6206b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 10:05:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7838f98098be530248d3a8d12a8947c77716d5756c003c9af98b5e42612893e-merged.mount: Deactivated successfully.
Nov 24 10:05:11 compute-0 podman[282838]: 2025-11-24 10:05:11.363227847 +0000 UTC m=+0.598415100 container remove 4bfa738dba4525215a83929fc23586ef70e84ba34b0ac2117f4baa9462a6206b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_buck, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:05:11 compute-0 systemd[1]: libpod-conmon-4bfa738dba4525215a83929fc23586ef70e84ba34b0ac2117f4baa9462a6206b.scope: Deactivated successfully.
Nov 24 10:05:11 compute-0 nova_compute[257700]: 2025-11-24 10:05:11.429 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:11 compute-0 sudo[282730]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:11 compute-0 sudo[282876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:05:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:11.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:11 compute-0 sudo[282876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:11 compute-0 sudo[282876]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:11 compute-0 sudo[282902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:05:11 compute-0 sudo[282902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:12 compute-0 podman[282968]: 2025-11-24 10:05:12.056668571 +0000 UTC m=+0.085753515 container create bc756c173fa51b5cdb96bfa4523ac6c8e7b0e77bf06bcb73b42827bc3e54d9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_kepler, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 10:05:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:12 compute-0 podman[282968]: 2025-11-24 10:05:11.994045417 +0000 UTC m=+0.023130381 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:05:12 compute-0 systemd[1]: Started libpod-conmon-bc756c173fa51b5cdb96bfa4523ac6c8e7b0e77bf06bcb73b42827bc3e54d9dc.scope.
Nov 24 10:05:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:05:12 compute-0 podman[282968]: 2025-11-24 10:05:12.146998949 +0000 UTC m=+0.176083933 container init bc756c173fa51b5cdb96bfa4523ac6c8e7b0e77bf06bcb73b42827bc3e54d9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_kepler, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:05:12 compute-0 podman[282968]: 2025-11-24 10:05:12.15472665 +0000 UTC m=+0.183811594 container start bc756c173fa51b5cdb96bfa4523ac6c8e7b0e77bf06bcb73b42827bc3e54d9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 10:05:12 compute-0 podman[282968]: 2025-11-24 10:05:12.157642552 +0000 UTC m=+0.186727516 container attach bc756c173fa51b5cdb96bfa4523ac6c8e7b0e77bf06bcb73b42827bc3e54d9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_kepler, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:05:12 compute-0 pedantic_kepler[282984]: 167 167
Nov 24 10:05:12 compute-0 systemd[1]: libpod-bc756c173fa51b5cdb96bfa4523ac6c8e7b0e77bf06bcb73b42827bc3e54d9dc.scope: Deactivated successfully.
Nov 24 10:05:12 compute-0 podman[282968]: 2025-11-24 10:05:12.162976424 +0000 UTC m=+0.192061388 container died bc756c173fa51b5cdb96bfa4523ac6c8e7b0e77bf06bcb73b42827bc3e54d9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_kepler, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 24 10:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed2d84e14812a4c5d76a7f15c788a3e6ff534a8e12a7b7d4773773c3ef8468a9-merged.mount: Deactivated successfully.
Nov 24 10:05:12 compute-0 podman[282968]: 2025-11-24 10:05:12.199682879 +0000 UTC m=+0.228767823 container remove bc756c173fa51b5cdb96bfa4523ac6c8e7b0e77bf06bcb73b42827bc3e54d9dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_kepler, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:05:12 compute-0 systemd[1]: libpod-conmon-bc756c173fa51b5cdb96bfa4523ac6c8e7b0e77bf06bcb73b42827bc3e54d9dc.scope: Deactivated successfully.
Nov 24 10:05:12 compute-0 podman[283007]: 2025-11-24 10:05:12.348964611 +0000 UTC m=+0.039231010 container create b104f0191e945c8548f748305cca9135284a288ed47f176b57dbef4884b5413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hypatia, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 10:05:12 compute-0 systemd[1]: Started libpod-conmon-b104f0191e945c8548f748305cca9135284a288ed47f176b57dbef4884b5413b.scope.
Nov 24 10:05:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beedaf1bc5f9d5c0bcef46d0e012d29b0bbf9e535e7ed94549bfcb61fbfc2b03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beedaf1bc5f9d5c0bcef46d0e012d29b0bbf9e535e7ed94549bfcb61fbfc2b03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beedaf1bc5f9d5c0bcef46d0e012d29b0bbf9e535e7ed94549bfcb61fbfc2b03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beedaf1bc5f9d5c0bcef46d0e012d29b0bbf9e535e7ed94549bfcb61fbfc2b03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:05:12 compute-0 podman[283007]: 2025-11-24 10:05:12.333769266 +0000 UTC m=+0.024035685 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:05:12 compute-0 podman[283007]: 2025-11-24 10:05:12.430118982 +0000 UTC m=+0.120385411 container init b104f0191e945c8548f748305cca9135284a288ed47f176b57dbef4884b5413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:05:12 compute-0 podman[283007]: 2025-11-24 10:05:12.438460817 +0000 UTC m=+0.128727226 container start b104f0191e945c8548f748305cca9135284a288ed47f176b57dbef4884b5413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 10:05:12 compute-0 podman[283007]: 2025-11-24 10:05:12.443583864 +0000 UTC m=+0.133850283 container attach b104f0191e945c8548f748305cca9135284a288ed47f176b57dbef4884b5413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 24 10:05:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:12 compute-0 ceph-mon[74331]: pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:12.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:13 compute-0 lvm[283099]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:05:13 compute-0 lvm[283099]: VG ceph_vg0 finished
Nov 24 10:05:13 compute-0 vigilant_hypatia[283024]: {}
Nov 24 10:05:13 compute-0 systemd[1]: libpod-b104f0191e945c8548f748305cca9135284a288ed47f176b57dbef4884b5413b.scope: Deactivated successfully.
Nov 24 10:05:13 compute-0 podman[283007]: 2025-11-24 10:05:13.103557562 +0000 UTC m=+0.793823971 container died b104f0191e945c8548f748305cca9135284a288ed47f176b57dbef4884b5413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:05:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-beedaf1bc5f9d5c0bcef46d0e012d29b0bbf9e535e7ed94549bfcb61fbfc2b03-merged.mount: Deactivated successfully.
Nov 24 10:05:13 compute-0 podman[283007]: 2025-11-24 10:05:13.14198827 +0000 UTC m=+0.832254669 container remove b104f0191e945c8548f748305cca9135284a288ed47f176b57dbef4884b5413b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hypatia, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:05:13 compute-0 systemd[1]: libpod-conmon-b104f0191e945c8548f748305cca9135284a288ed47f176b57dbef4884b5413b.scope: Deactivated successfully.
Nov 24 10:05:13 compute-0 sudo[282902]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:05:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:05:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:05:13 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:05:13 compute-0 sudo[283116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:05:13 compute-0 sudo[283116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:13 compute-0 sudo[283116]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:13.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:14 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:05:14 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:05:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:05:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:14.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:05:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:05:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:05:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:05:15 compute-0 ceph-mon[74331]: pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:05:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:05:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:05:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:05:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:05:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:05:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:05:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:15.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:05:16 compute-0 nova_compute[257700]: 2025-11-24 10:05:16.278 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:16 compute-0 nova_compute[257700]: 2025-11-24 10:05:16.432 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:16.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:17 compute-0 ceph-mon[74331]: pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:17.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:17.581Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:05:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:05:18 compute-0 ceph-mon[74331]: pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:05:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:18.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:18.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:05:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:05:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:19.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:05:19 compute-0 podman[283149]: 2025-11-24 10:05:19.811859076 +0000 UTC m=+0.087175022 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 24 10:05:19 compute-0 podman[283148]: 2025-11-24 10:05:19.832390132 +0000 UTC m=+0.101359881 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 10:05:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:05:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:05:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:05:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:05:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:05:20.575 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:05:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:05:20.576 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:05:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:05:20.576 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:05:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:20 compute-0 ceph-mon[74331]: pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:20.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:05:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:05:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:05:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:05:21 compute-0 nova_compute[257700]: 2025-11-24 10:05:21.279 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:21 compute-0 nova_compute[257700]: 2025-11-24 10:05:21.434 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:21.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:21 compute-0 nova_compute[257700]: 2025-11-24 10:05:21.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:22 compute-0 ceph-mon[74331]: pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:05:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:22.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:05:22 compute-0 nova_compute[257700]: 2025-11-24 10:05:22.916 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:22 compute-0 sshd-session[283194]: Invalid user administrator from 83.229.122.23 port 58396
Nov 24 10:05:22 compute-0 nova_compute[257700]: 2025-11-24 10:05:22.930 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:23 compute-0 sshd-session[283194]: Received disconnect from 83.229.122.23 port 58396:11: Bye Bye [preauth]
Nov 24 10:05:23 compute-0 sshd-session[283194]: Disconnected from invalid user administrator 83.229.122.23 port 58396 [preauth]
Nov 24 10:05:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:23.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:23 compute-0 nova_compute[257700]: 2025-11-24 10:05:23.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:23 compute-0 nova_compute[257700]: 2025-11-24 10:05:23.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:23 compute-0 nova_compute[257700]: 2025-11-24 10:05:23.939 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:05:23 compute-0 nova_compute[257700]: 2025-11-24 10:05:23.940 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:05:23 compute-0 nova_compute[257700]: 2025-11-24 10:05:23.940 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:05:23 compute-0 nova_compute[257700]: 2025-11-24 10:05:23.940 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:05:23 compute-0 nova_compute[257700]: 2025-11-24 10:05:23.941 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:05:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:05:24 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2711357868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:24 compute-0 nova_compute[257700]: 2025-11-24 10:05:24.394 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:05:24 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2711357868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:24 compute-0 nova_compute[257700]: 2025-11-24 10:05:24.524 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:05:24 compute-0 nova_compute[257700]: 2025-11-24 10:05:24.525 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4568MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:05:24 compute-0 nova_compute[257700]: 2025-11-24 10:05:24.525 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:05:24 compute-0 nova_compute[257700]: 2025-11-24 10:05:24.525 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:05:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:24 compute-0 nova_compute[257700]: 2025-11-24 10:05:24.600 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:05:24 compute-0 nova_compute[257700]: 2025-11-24 10:05:24.601 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:05:24 compute-0 nova_compute[257700]: 2025-11-24 10:05:24.618 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:05:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:24.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:05:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:05:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:05:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:05:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:05:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11465984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:25 compute-0 nova_compute[257700]: 2025-11-24 10:05:25.045 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:05:25 compute-0 nova_compute[257700]: 2025-11-24 10:05:25.051 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:05:25 compute-0 nova_compute[257700]: 2025-11-24 10:05:25.096 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:05:25 compute-0 nova_compute[257700]: 2025-11-24 10:05:25.098 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:05:25 compute-0 nova_compute[257700]: 2025-11-24 10:05:25.098 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:05:25 compute-0 ceph-mon[74331]: pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/11465984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:25.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:25 compute-0 podman[283244]: 2025-11-24 10:05:25.780927568 +0000 UTC m=+0.063234849 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 24 10:05:26 compute-0 nova_compute[257700]: 2025-11-24 10:05:26.099 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:26 compute-0 nova_compute[257700]: 2025-11-24 10:05:26.099 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:05:26 compute-0 nova_compute[257700]: 2025-11-24 10:05:26.100 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:05:26 compute-0 nova_compute[257700]: 2025-11-24 10:05:26.120 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:05:26 compute-0 nova_compute[257700]: 2025-11-24 10:05:26.121 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:26 compute-0 nova_compute[257700]: 2025-11-24 10:05:26.121 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:05:26 compute-0 nova_compute[257700]: 2025-11-24 10:05:26.281 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:26 compute-0 nova_compute[257700]: 2025-11-24 10:05:26.436 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:26 compute-0 ceph-mon[74331]: pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:26.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:27.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:27.582Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:05:27 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3828311499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:27 compute-0 nova_compute[257700]: 2025-11-24 10:05:27.938 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3440100558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/479419961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:28 compute-0 ceph-mon[74331]: pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:05:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:28.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:05:28 compute-0 nova_compute[257700]: 2025-11-24 10:05:28.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:28 compute-0 nova_compute[257700]: 2025-11-24 10:05:28.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:05:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:28.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:05:29 compute-0 sudo[283267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:05:29 compute-0 sudo[283267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:29 compute-0 sudo[283267]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:29.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1358056965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:05:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:05:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:05:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:05:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:05:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:05:30 compute-0 ceph-mon[74331]: pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:05:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:30.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:05:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:05:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:05:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:05:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:05:31 compute-0 nova_compute[257700]: 2025-11-24 10:05:31.283 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:31 compute-0 nova_compute[257700]: 2025-11-24 10:05:31.438 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:31.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:32 compute-0 ceph-mon[74331]: pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:32.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:33.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:34 compute-0 ceph-mon[74331]: pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:34.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:05:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:05:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:05:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:05:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:35.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:36 compute-0 nova_compute[257700]: 2025-11-24 10:05:36.285 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:36 compute-0 nova_compute[257700]: 2025-11-24 10:05:36.439 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:36 compute-0 ceph-mon[74331]: pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:36.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:37.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:37.583Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:05:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:37.583Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:05:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:38 compute-0 ceph-mon[74331]: pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:38.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:38.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:05:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:39.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:05:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:05:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:05:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:05:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:40 compute-0 ceph-mon[74331]: pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:40.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:05:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:05:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:05:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:05:41 compute-0 nova_compute[257700]: 2025-11-24 10:05:41.287 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:41 compute-0 nova_compute[257700]: 2025-11-24 10:05:41.440 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:41.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:41 compute-0 sshd-session[283305]: Accepted publickey for zuul from 192.168.122.10 port 41258 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 10:05:41 compute-0 systemd-logind[822]: New session 56 of user zuul.
Nov 24 10:05:41 compute-0 systemd[1]: Started Session 56 of User zuul.
Nov 24 10:05:41 compute-0 sshd-session[283305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 10:05:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:42 compute-0 sudo[283309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 24 10:05:42 compute-0 sudo[283309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 10:05:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:42 compute-0 ceph-mon[74331]: pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:42.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:43.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:44 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17061 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:44 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26591 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:44 compute-0 ceph-mon[74331]: from='client.17061 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:44 compute-0 ceph-mon[74331]: from='client.26591 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:44 compute-0 ceph-mon[74331]: pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:44.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:44 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25333 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:05:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:05:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:05:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26597 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17079 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:05:45
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', '.rgw.root', 'backups', 'vms', '.nfs', 'default.rgw.meta']
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:05:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:45.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 24 10:05:45 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1767787726' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25339 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:45 compute-0 ceph-mon[74331]: from='client.25333 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:45 compute-0 ceph-mon[74331]: from='client.26597 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:45 compute-0 ceph-mon[74331]: from='client.17079 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:05:45 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2646850891' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:05:45 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1767787726' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:05:45 compute-0 ceph-mon[74331]: from='client.25339 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:05:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:05:46 compute-0 nova_compute[257700]: 2025-11-24 10:05:46.288 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:46 compute-0 nova_compute[257700]: 2025-11-24 10:05:46.441 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:46 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4057720917' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:05:46 compute-0 ceph-mon[74331]: pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:46.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:47.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:47.584Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:05:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:48 compute-0 ceph-mon[74331]: pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:48.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:05:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:48.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:05:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:05:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:48.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:05:49 compute-0 sudo[283645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:05:49 compute-0 sudo[283645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:05:49 compute-0 sudo[283645]: pam_unix(sudo:session): session closed for user root
Nov 24 10:05:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:49.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:05:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:05:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:05:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:05:50 compute-0 podman[283675]: 2025-11-24 10:05:50.208905279 +0000 UTC m=+0.058238394 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 24 10:05:50 compute-0 podman[283676]: 2025-11-24 10:05:50.280309431 +0000 UTC m=+0.129409571 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 10:05:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:50 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26618 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:50 compute-0 ceph-mon[74331]: pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:50.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:05:50] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:05:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:05:50] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:05:51 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 24 10:05:51 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:51 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26630 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:51 compute-0 ovs-vsctl[283749]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 24 10:05:51 compute-0 nova_compute[257700]: 2025-11-24 10:05:51.290 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:51 compute-0 nova_compute[257700]: 2025-11-24 10:05:51.441 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:51 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25348 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:51.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:51 compute-0 ceph-mon[74331]: from='client.26618 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:51 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/805920003' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:51 compute-0 ceph-mon[74331]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:51 compute-0 ceph-mon[74331]: from='client.26630 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:51 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/467161652' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:51 compute-0 ceph-mon[74331]: from='client.25348 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:51 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26654 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:52 compute-0 virtqemud[257224]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 24 10:05:52 compute-0 virtqemud[257224]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 24 10:05:52 compute-0 virtqemud[257224]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 10:05:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:52 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26672 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:52 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: cache status {prefix=cache status} (starting...)
Nov 24 10:05:52 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:05:52 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2099222001' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:05:52 compute-0 ceph-mon[74331]: from='client.26654 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:52 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1395852554' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:05:52 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2236574839' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:05:52 compute-0 ceph-mon[74331]: pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:52 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: client ls {prefix=client ls} (starting...)
Nov 24 10:05:52 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:05:52 compute-0 lvm[284082]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:05:52 compute-0 lvm[284082]: VG ceph_vg0 finished
Nov 24 10:05:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:52.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:53 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26684 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: damage ls {prefix=damage ls} (starting...)
Nov 24 10:05:53 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:05:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:53.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 24 10:05:53 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17139 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: dump loads {prefix=dump loads} (starting...)
Nov 24 10:05:53 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:05:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 24 10:05:53 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2624586791' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 24 10:05:53 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:05:53 compute-0 ceph-mon[74331]: from='client.26672 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/464524726' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mon[74331]: from='client.26684 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2385770739' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2194257639' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mon[74331]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mon[74331]: from='client.17139 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/832786423' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2624586791' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25372 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:53 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 24 10:05:53 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:05:54 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 24 10:05:54 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:05:54 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17154 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:05:54 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3262176359' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 24 10:05:54 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 24 10:05:54 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:05:54 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25387 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 24 10:05:54 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:05:54 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17178 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26744 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T10:05:54.473+0000 7fac1dd94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 10:05:54 compute-0 ceph-mgr[74626]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 10:05:54 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 24 10:05:54 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:05:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:54 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25405 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.25372 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.17154 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3262176359' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2100678030' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: ops {prefix=ops} (starting...)
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1728751067' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3738133278' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.25387 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.17178 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1555652975' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.26744 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1643640683' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/422130285' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:05:54 compute-0 ceph-mon[74331]: pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:54 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17193 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:54.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:54 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 24 10:05:54 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1251301519' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:05:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:05:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:05:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:05:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:05:55 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25417 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:55 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17220 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:55 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 24 10:05:55 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500613712' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:05:55 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26810 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:55 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: session ls {prefix=session ls} (starting...)
Nov 24 10:05:55 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:05:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:55.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:55 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: status {prefix=status} (starting...)
Nov 24 10:05:55 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25438 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:55 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26825 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:55 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17232 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25453 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26837 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 24 10:05:56 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2716126495' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.25405 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.17193 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1729941568' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/833142917' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1251301519' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4157313813' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.25417 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1072016513' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.17220 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2500613712' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.26810 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1743021929' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3155060605' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:05:56 compute-0 nova_compute[257700]: 2025-11-24 10:05:56.292 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:56 compute-0 nova_compute[257700]: 2025-11-24 10:05:56.443 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:05:56 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26849 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 24 10:05:56 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 10:05:56 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1423102505' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:56 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 24 10:05:56 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/220911383' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:56 compute-0 podman[284564]: 2025-11-24 10:05:56.788899979 +0000 UTC m=+0.060813408 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 10:05:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:56.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:57 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26867 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 10:05:57 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/926336242' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:05:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 24 10:05:57 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3854978652' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.25438 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.26825 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.17232 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2180658346' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2333924499' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.25453 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.26837 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2716126495' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/419866527' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2559918353' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.26849 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/45030159' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1423102505' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/220911383' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2847382315' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/317736950' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/533231242' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/926336242' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3854978652' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2915704996' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26879 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25504 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T10:05:57.464+0000 7fac1dd94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 10:05:57 compute-0 ceph-mgr[74626]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 10:05:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 10:05:57 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1363121793' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17325 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T10:05:57.525+0000 7fac1dd94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 10:05:57 compute-0 ceph-mgr[74626]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 10:05:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:57.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:57.585Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:05:57 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26894 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 24 10:05:57 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019638436' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:05:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 24 10:05:57 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163920426' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26915 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.26867 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3256066392' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.26879 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.25504 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1363121793' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.17325 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2950687480' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/239825461' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1019638436' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3163920426' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3209223688' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2888619743' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3916645189' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 24 10:05:58 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3328825171' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 24 10:05:58 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1191403775' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:58 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26939 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25543 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17385 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 24 10:05:58 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380113186' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:05:58.934Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:05:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:05:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:05:58.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:05:59 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26948 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25558 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17406 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 10:05:59 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2024092312' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mon[74331]: from='client.26894 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mon[74331]: from='client.26915 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3328825171' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1191403775' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3784852311' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mon[74331]: pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:05:59 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1380113186' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3134986875' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1381658976' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2024092312' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26969 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25576 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:59 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17421 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:05:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:05:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:05:59.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:09.165895+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 5357568 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:10.166009+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991115 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 5357568 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:11.166155+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 5357568 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:12.166277+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 5349376 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:13.166382+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 5349376 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:14.166520+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 5341184 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:15.166682+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991115 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 5341184 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:16.166814+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 5332992 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:17.166956+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 5324800 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:18.167112+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 5324800 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:19.167441+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 5316608 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:20.167582+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991115 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 5316608 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:21.167750+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 5308416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:22.167950+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 5308416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:23.168073+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 5300224 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:24.168275+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 5300224 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:25.168469+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991115 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 5300224 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:26.168608+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 5283840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:27.168738+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 5283840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:28.168879+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 5283840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:29.168959+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 5275648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:30.169120+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991115 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 5275648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:31.169243+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 5267456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:32.169366+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 5267456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:33.169508+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 5259264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:34.169655+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 5259264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:35.169814+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991115 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 5251072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:36.169974+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 5242880 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:37.170178+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 5242880 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:38.173247+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 5234688 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:39.173399+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 5234688 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:40.173521+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991115 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 5226496 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:41.173647+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 5218304 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:42.173772+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 5218304 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:43.173915+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 5210112 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:44.174037+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 5210112 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:45.174188+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991115 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 5201920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:46.174320+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 5201920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:47.174450+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 5193728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:48.174633+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 5193728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:49.174793+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882800 session 0x558d231c5e00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882400 session 0x558d214bf680
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 5185536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:50.174920+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991115 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 5185536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:51.175300+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 5185536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:52.175445+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 5177344 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:53.175571+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 5177344 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:54.175705+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 5177344 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:55.175868+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991115 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 5169152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:56.176008+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 5169152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:57.176144+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 5160960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:58.176323+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 5160960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:33:59.176443+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 5152768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:00.176579+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21d78000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 79.424644470s of 79.427619934s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991247 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 5136384 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:01.176727+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 5128192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:02.176865+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 5120000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:03.176996+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 5120000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:04.177162+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 5120000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:05.177295+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992759 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 5111808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:06.177431+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f32000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 5111808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:07.177773+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 5103616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:08.177972+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 5103616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:09.178149+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 5095424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:10.178268+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992759 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 5095424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:11.178408+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 5095424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:12.178588+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.071694374s of 12.077914238s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 5087232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:13.178761+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 5087232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:14.178898+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 5079040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:15.179065+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992168 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 5079040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:16.179205+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 5070848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:17.179347+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 5062656 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:18.179531+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 5062656 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:19.179680+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 5054464 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:20.179807+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992036 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 5054464 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:21.180009+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 5054464 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:22.180153+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 5046272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:23.181491+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 5046272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:24.181630+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 5038080 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:25.181754+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992036 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 5029888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:26.181909+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 5013504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:27.182059+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 5013504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:28.182277+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 5013504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:29.182530+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 5005312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:30.182653+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992036 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 5005312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:31.182791+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 4997120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:32.182936+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 4997120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:33.183130+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882000 session 0x558d215d7860
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33800 session 0x558d215c6780
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 4988928 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:34.183286+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 4988928 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:35.183435+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992036 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 4988928 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:36.183574+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 4980736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:37.183723+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 4980736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:38.183935+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4972544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:39.184413+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4972544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:40.184633+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992036 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 4972544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:41.184783+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4964352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:42.184929+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 4964352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:43.185051+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:44.185161+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 4956160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.713342667s of 31.726411819s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:45.185299+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 4956160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992168 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:46.185469+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 4947968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:47.185598+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 4931584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:48.185780+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 4931584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:49.185938+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 4923392 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:50.186085+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 4923392 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21611800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992168 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:51.186268+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 4923392 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:52.186410+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 4915200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:53.186553+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 4915200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:54.186681+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 4907008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:55.186814+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 4907008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992168 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:56.186962+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 4898816 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.088380814s of 12.092455864s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:57.187107+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 4898816 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:58.187273+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 4898816 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:34:59.187409+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 4890624 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:00.187534+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 4890624 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991577 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:01.187672+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 4890624 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:02.187830+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 4882432 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:03.187995+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 4882432 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:04.188173+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 4874240 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:05.188369+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 4874240 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991445 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:06.188506+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 4866048 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:07.188645+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 4866048 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:08.188830+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 4866048 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:09.188972+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 4857856 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:10.189181+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 4857856 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991445 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:11.189385+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 4849664 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:12.189618+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 4841472 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:13.189987+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 4841472 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:14.190137+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 4833280 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:15.190355+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 4833280 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991445 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:16.190510+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 4825088 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:17.190649+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 4825088 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:18.190820+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 4816896 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:19.190952+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 4816896 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:20.191118+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 4816896 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991445 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:21.191284+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 4800512 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:22.191387+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 4800512 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:23.191527+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 4792320 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:24.191663+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 4792320 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:25.191803+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 4792320 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991445 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:26.191945+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 4784128 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:27.192075+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 4784128 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:28.192465+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 4784128 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:29.192628+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 4775936 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:30.192791+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 4767744 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:31.192962+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991445 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 4767744 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:32.193181+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 4767744 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:33.193318+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 4759552 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:34.193421+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 4759552 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d215d8b40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:35.193578+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 4759552 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:36.193745+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991445 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 4751360 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:37.193915+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 4751360 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:38.194183+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 4743168 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:39.194322+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 4743168 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:40.194464+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 4734976 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:41.194615+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991445 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 4734976 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:42.194766+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 4734976 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:43.194923+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 4726784 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:44.195130+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 4726784 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:45.195323+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 4718592 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 49.173088074s of 49.179107666s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:46.195573+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991577 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 4718592 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:47.195714+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 4718592 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:48.195951+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 4710400 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:49.196140+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 4710400 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:50.196373+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 4702208 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:51.196534+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993089 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 4702208 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:52.196694+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 4702208 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:53.196838+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 4694016 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:54.196990+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 4694016 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:55.197155+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 4685824 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:56.197305+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993089 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 4685824 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:57.197461+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 4677632 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:58.197642+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 4677632 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:35:59.198043+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 4677632 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:00.198170+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.712577820s of 14.719693184s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 4653056 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:01.198352+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992957 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 4653056 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:02.198499+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 4644864 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:03.198667+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 4644864 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8461 writes, 35K keys, 8461 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8461 writes, 1673 syncs, 5.06 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8461 writes, 35K keys, 8461 commit groups, 1.0 writes per commit group, ingest: 21.65 MB, 0.04 MB/s
                                           Interval WAL: 8461 writes, 1673 syncs, 5.06 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
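
The block above is one multi-line RocksDB statistics dump that ceph-osd writes for each BlueStore column family; the [O-1], [O-2], [L] and [P] tags are column-family names, not error codes. Two details are worth flagging when reading it: the block-cache "occupancy: 18446744073709551615" is 2^64 - 1, i.e. almost certainly a -1 "not tracked" sentinel rather than a real entry count, and the per-level compaction tables are plain whitespace-separated columns. A minimal parsing sketch follows, assuming only the column layout shown above (the sample row is the L0 line from the [O-2] section, with whitespace collapsed); it is illustrative, not a reference parser.

    # Minimal sketch: parse a per-level row of a RocksDB
    # "** Compaction Stats [CF] **" table as dumped above.
    # Column names are taken from the header row in this log.
    HEADER = ("Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) "
              "Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) "
              "CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop "
              "Rblob(GB) Wblob(GB)").split()

    def parse_row(line: str) -> dict:
        toks = line.split()
        # "1.25 KB" occupies two tokens; re-join value and unit.
        toks[2:4] = [" ".join(toks[2:4])]
        return dict(zip(HEADER, toks))

    sample = ("L0 1/0 1.25 KB 0.1 0.0 0.0 0.0 0.0 0.0 0.0 "
              "1.0 0.0 0.3 0.00 0.00 1 0.004 0 0 0.0 0.0")
    row = parse_row(sample)
    print(row["Level"], row["Files"], row["Size"], row["Comp(cnt)"])
    # -> L0 1/0 1.25 KB 1

The Priority variant of the same table (the second block printed for each column family) has identical columns with "Priority" in place of "Level", so the same splitter applies to its rows.
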
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:04.198852+0000)
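
From here on, the "monclient: tick / _check_auth_tickets / _check_auth_rotating" triplet repeats once per generated second: each pass verifies that the cephx rotating service keys are still usable and logs a validity timestamp. Note that every journald line in this stretch is stamped 10:05:59 while the embedded timestamps advance by one second per tick; that pattern is consistent with the daemon's buffered debug log being flushed to the journal in one burst (for example after a debug-level change or an explicit log dump), so the embedded times are the better guide to when each message was produced. A small extraction sketch that computes the tick cadence (the two sample lines are copied from this log):

    # Sketch: pull the embedded timestamps out of
    # "_check_auth_rotating" lines and report the tick cadence.
    import re
    from datetime import datetime

    PAT = re.compile(r"expire after (\S+?)\+0000")

    lines = [
        "monclient: _check_auth_rotating have uptodate secrets "
        "(they expire after 2025-11-24T09:36:04.198852+0000)",
        "monclient: _check_auth_rotating have uptodate secrets "
        "(they expire after 2025-11-24T09:36:05.199003+0000)",
    ]
    stamps = [datetime.fromisoformat(PAT.search(l).group(1)) for l in lines]
    deltas = [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]
    print(deltas)  # -> [1.000151]  i.e. one tick per second
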
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 4571136 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:05.199003+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 4571136 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:06.199190+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992957 data_alloc: 218103808 data_used: 184320
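
Every few seconds the two "commit_cache_size High Pri Pool Ratio" lines and a "bluestore.MempoolThread _resize_shards" line appear together: the mempool thread re-splits the tuned cache budget across its consumers and re-derives the high-priority pool ratios for the RocksDB block caches (0.285714 and 0.0555556 are 2/7 and 1/18 respectively). The allocations also tie back numerically to the RocksDB dump above: kv_alloc 1207959552 B equals the 1.12 GB BinnedLRUCache capacity, and kv_onode_alloc 234881024 B equals the 224.00 MB one. A quick check of the split, with the numbers copied from the line above:

    # Sketch: sanity-check how the tuned BlueStore budget from the
    # "_resize_shards" line is divided. Values copied from this log.
    cache_size = 2845415832          # matches "new mem" in tune_memory
    alloc = {
        "kv":       1207959552,      # = 1.125 GiB -> the 1.12 GB block cache
        "kv_onode":  234881024,      # = 224 MiB   -> the 224.00 MB block cache
        "meta":     1140850688,
        "data":      218103808,
    }
    for name, b in alloc.items():
        print(f"{name:9s} {b/2**20:8.1f} MiB  {b/cache_size:6.1%}")
    print(f"assigned  {sum(alloc.values())/cache_size:6.1%} of budget")
    # kv gets ~42.5%, meta ~40.1%; ~98.5% of the budget is assigned
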
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 4571136 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:07.199347+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 4562944 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:08.199561+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 4562944 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:09.199739+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 4554752 heap: 90054656 old mem: 2845415832 new mem: 2845415832
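
The once-per-tick "prioritycache tune_memory" lines are the memory autotuner's report: target is the OSD's memory target (4294967296 B = 4 GiB), mapped/unmapped/heap appear to come from the allocator's heap statistics, and "old mem"/"new mem" are the aggregate cache budget before and after the pass. With only ~81.6 MiB mapped against a 4 GiB target, the budget stays pinned at 2845415832 B (about 2.65 GiB) for this whole capture. A unit-conversion sketch (sample line copied from this log; field meanings as read above):

    # Sketch: turn a "prioritycache tune_memory" line into human units.
    import re

    line = ("prioritycache tune_memory target: 4294967296 mapped: 85483520 "
            "unmapped: 4571136 heap: 90054656 old mem: 2845415832 "
            "new mem: 2845415832")
    fields = dict(re.findall(r"(\w[\w ]*?):\s*(\d+)",
                             line.split("tune_memory ", 1)[1]))
    for k, v in fields.items():
        print(f"{k:8s} {int(v)/2**20:10.1f} MiB")
    # target 4096.0 MiB, mapped 81.5 MiB, old/new mem 2713.6 MiB
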
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21611800 session 0x558d2153d2c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33800 session 0x558d215dc1e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:10.200133+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 4546560 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:11.200286+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992957 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
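
The recurring "osd.0 150 heartbeat osd_stat(...)" lines embed the store's statfs counters in hex (the 150 is the OSD's current osdmap epoch, and peers [1,2] are its heartbeat peers). Reading the first triple as available/internally_reserved/total, which is how I read Ceph's store_statfs_t printer, the numbers describe a nearly idle ~20 GiB OSD: about 19.95 GiB free, roughly 1 MiB of object data, and ~48 MiB of internal metadata. A decoding sketch, with the field names explicitly an assumption:

    # Sketch: decode the hex fields of a heartbeat store_statfs line.
    # Field names are assumptions based on store_statfs_t output order
    # (available/internally_reserved/total, data stored/allocated).
    def gib(x: int) -> str:
        return f"{x / 2**30:.2f} GiB"

    statfs = {
        "available":      0x4fca54000,
        "reserved":       0x0,
        "total":          0x4ffc00000,
        "data_stored":    0x1086f8,
        "data_allocated": 0x1c8000,
        "omap":           0x63a,
        "meta":           0x2fdf9c6,
    }
    print(gib(statfs["available"]), "free of", gib(statfs["total"]))
    print(f"data {statfs['data_stored']/2**20:.2f} MiB stored, "
          f"meta {statfs['meta']/2**20:.1f} MiB")
    # -> 19.95 GiB free of 20.00 GiB
    #    data 1.03 MiB stored, meta 47.9 MiB
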
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 4546560 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:12.200443+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 4538368 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:13.200567+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 4538368 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
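
This is the one line in the stretch that names the monitor endpoint: v2:192.168.122.100:3300/0 is a messenger-v2 address, where the v2: prefix selects the msgr2 protocol, 3300 is the default msgr2 monitor port, and the trailing /0 is a connection nonce. A toy parser for that address shape (format inferred from this line only; IPv6 bracket forms would need more care):

    # Sketch: split a Ceph entity address like "v2:192.168.122.100:3300/0"
    # into protocol, host, port and nonce.
    def parse_addr(addr: str):
        proto, rest = addr.split(":", 1)
        hostport, nonce = rest.rsplit("/", 1)
        host, port = hostport.rsplit(":", 1)
        return proto, host, int(port), int(nonce)

    print(parse_addr("v2:192.168.122.100:3300/0"))
    # -> ('v2', '192.168.122.100', 3300, 0)
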
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:14.200683+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 4521984 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:15.200823+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 4521984 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:16.201009+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992957 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 4521984 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:17.201195+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 4513792 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:18.201434+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 4505600 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:19.201604+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 4505600 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:20.201753+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.008600235s of 20.012044907s, submitted: 1
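
The "bluestore ... _kv_sync_thread utilization" lines report how busy BlueStore's key/value commit thread was over its reporting window. Here it was idle for 20.0086 s of a 20.0120 s window with a single batch submitted, i.e. busy for about 3.4 ms, 0.017% of the interval; the later reports in this capture (13.978 s idle of 13.991 s, 46.532 s of 46.770 s) read the same way. The arithmetic:

    # Busy fraction from a "_kv_sync_thread utilization" line
    # (numbers copied from the line above).
    idle, total = 20.008600235, 20.012044907
    busy = total - idle
    print(f"busy {busy*1e3:.1f} ms ({busy/total:.3%} of the window)")
    # -> busy 3.4 ms (0.017% of the window)
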
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 4505600 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:21.201895+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993089 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 4497408 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:22.202026+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 4497408 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:23.202177+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 4489216 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:24.202308+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 4489216 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:25.202448+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 4489216 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:26.202566+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994601 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 4481024 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:27.202701+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 4481024 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:28.202854+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 4472832 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:29.202997+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 4472832 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:30.203164+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 4472832 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:31.203308+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994010 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 4464640 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:32.203472+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 4464640 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:33.203601+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 4456448 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:34.203819+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.978015900s of 13.991077423s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 4456448 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:35.204005+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 4448256 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:36.204164+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 4448256 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:37.204303+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85606400 unmapped: 4448256 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:38.204674+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 4440064 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:39.204876+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 4440064 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:40.205192+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 4431872 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:41.205378+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 4431872 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:42.205636+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 4423680 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:43.205793+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 4423680 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:44.206006+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 4423680 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:45.206154+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 4407296 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:46.206359+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 4407296 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:47.206506+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 4399104 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:48.206733+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 4399104 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:49.206936+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 4390912 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:50.207074+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 4390912 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:51.207249+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 4390912 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:52.207385+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 4382720 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:53.207552+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 4382720 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:54.208961+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 4382720 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:55.209238+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 4374528 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:56.209374+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 4374528 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:57.209637+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 4366336 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:58.209888+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 4366336 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:36:59.210043+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:00.210169+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:01.210349+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:02.210539+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 4349952 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:03.210727+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 4349952 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:04.210953+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4341760 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:05.211117+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4341760 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:06.211271+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 4333568 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:07.211415+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 4333568 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:08.211610+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 4333568 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:09.211762+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4325376 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:10.213477+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4317184 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:11.213829+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 4308992 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:12.214003+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 4308992 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:13.214192+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 4300800 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:14.214534+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 4300800 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:15.214684+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 4300800 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:16.214923+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 4292608 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:17.215121+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 4292608 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:18.215388+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 4292608 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:19.215546+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fca54000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:20.215950+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.532089233s of 46.769638062s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:21.216145+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 4399104 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:22.216297+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 4423680 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:23.216469+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:24.216634+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:25.216792+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:26.216981+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:27.217176+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:28.217537+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:29.217691+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882000 session 0x558d21e2c960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:30.217918+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:31.218139+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:32.218310+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:33.218505+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:34.218648+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:35.218785+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:36.218963+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:37.219177+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:38.219390+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:39.219528+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:40.219688+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:41.219862+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993878 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:42.220015+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.024885178s of 21.219347000s, submitted: 320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:43.220164+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:44.220301+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:45.222314+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:46.222443+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994010 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:47.222580+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4358144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:48.222746+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882400 session 0x558d2153dc20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 4349952 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:49.222912+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 4349952 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:50.223081+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 4349952 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:51.223275+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994931 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4341760 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:52.223429+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4341760 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:53.223613+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4341760 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:54.223832+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 4333568 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:55.224030+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.177521706s of 13.289117813s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 4333568 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:56.224215+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994799 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4325376 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:57.224381+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4325376 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:58.224589+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4325376 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:37:59.224768+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d209d9c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4317184 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:00.224903+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4317184 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:01.225051+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994931 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 4308992 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:02.225204+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 4308992 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:03.225368+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 4300800 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:04.226234+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 4300800 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:05.226427+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21611800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.107577324s of 10.175964355s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 4300800 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:06.226639+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997955 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 4300800 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:07.227020+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 4300800 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:08.227291+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 4292608 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:09.227463+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:10.227669+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:11.227862+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997364 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:12.228022+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:13.228184+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:14.228335+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:15.228508+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:16.228673+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997232 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:17.228868+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:18.229072+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:19.229259+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:20.229442+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:21.229581+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997232 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:22.229720+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:23.229858+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33000 session 0x558d23bc83c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882800 session 0x558d2153c3c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:24.229994+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:25.230253+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:26.230419+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997232 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:27.230552+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:28.230714+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:29.230841+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:30.231024+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:31.231168+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997232 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:32.231304+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:33.231454+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:34.231588+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.505149841s of 28.555019379s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:35.231906+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4284416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:36.232042+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997364 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 4276224 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:37.232250+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 4276224 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:38.232416+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 4276224 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:39.232571+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 4276224 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:40.232717+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:41.232859+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 4276224 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998876 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:42.233205+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 4276224 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:43.233356+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 4276224 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:44.233482+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 4276224 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:45.233628+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 4276224 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:46.233781+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997694 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:47.233925+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.958631516s of 12.973783493s, submitted: 4
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21611800 session 0x558d215d72c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d21e2da40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f32000 session 0x558d22f341e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21d78000 session 0x558d22f345a0
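Note: four messenger connections reset back to back. ms_handle_reset on its own is routine, peers drop idle sessions all the time, but a single con pointer repeating across many resets can point at a flapping peer. A tally makes that easy to spot; the sample lines are copied from the burst above:

```python
# Sketch: count ms_handle_reset events per connection pointer; a pointer that
# dominates the tally would suggest one flapping peer. Lines copied from above.
import re
from collections import Counter

lines = [
    "osd.0 150 ms_handle_reset con 0x558d21611800 session 0x558d215d72c0",
    "osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d21e2da40",
    "osd.0 150 ms_handle_reset con 0x558d21f32000 session 0x558d22f341e0",
    "osd.0 150 ms_handle_reset con 0x558d21d78000 session 0x558d22f345a0",
]
tally = Counter(re.search(r"con (0x[0-9a-f]+)", l).group(1) for l in lines)
print(tally.most_common())  # every con appears once here, so no obvious flapper
```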
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:48.234116+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:49.234247+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:50.234414+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:51.234715+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997562 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:52.234871+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:53.235072+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:54.235394+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:55.235609+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:56.235769+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997562 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:57.235921+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:58.236091+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 4268032 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d209d9c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.504429817s of 11.507369041s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21611800
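Note: the two "added challenge" events reuse pointers seen in the earlier reset burst (0x558d209d9c00 and 0x558d21611800), which reads as those peers reconnecting and redoing the cephx challenge/response. Joining the two event types by connection pointer is a heuristic of mine, not a correlation Ceph documents:

```python
# Heuristic sketch (my pairing, not a Ceph-documented correlation): match
# "added challenge" events against the earlier reset burst by con pointer.
import re

reset_cons = {"0x558d21611800", "0x558d209d9c00", "0x558d21f32000", "0x558d21d78000"}
challenges = [
    "monclient: handle_auth_request added challenge on 0x558d209d9c00",
    "monclient: handle_auth_request added challenge on 0x558d21611800",
]
for line in challenges:
    ptr = re.search(r"on (0x[0-9a-f]+)", line).group(1)
    print(ptr, "reauth after reset" if ptr in reset_cons else "fresh connection")
```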
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:38:59.236277+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 4259840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:00.236397+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 4259840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:01.236564+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 4259840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997826 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:02.236686+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 4259840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:03.236854+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 4259840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:04.237008+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 4259840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:05.237175+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 4259840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:06.237359+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999338 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:07.237498+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f32000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:08.237656+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:09.237804+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:10.237926+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:11.238155+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000850 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:12.238280+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.979530334s of 14.006541252s, submitted: 4
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:13.238421+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:14.238571+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:15.238730+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:16.238872+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:17.239015+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:18.239185+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:19.239396+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:20.239550+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:21.239737+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:22.239927+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:23.240081+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:24.240231+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:25.240456+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:26.240600+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:27.240736+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:28.240900+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4251648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:29.241061+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:30.241177+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:31.241410+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:32.241563+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:33.241741+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:34.241886+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:35.242017+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:36.242255+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:37.242483+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:38.242663+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:39.242794+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:40.242938+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:41.243138+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:42.243289+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:43.243552+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:44.243680+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:45.247080+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:46.247244+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:47.247381+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:48.247554+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:49.247738+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4243456 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:50.247925+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:51.248159+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:52.248304+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:53.248487+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:54.248634+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:55.248876+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:56.249034+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:57.249205+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:58.249444+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:39:59.249577+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:00.249758+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:01.249896+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:02.250018+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:03.250207+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:04.250753+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:05.250895+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:06.251043+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:07.251180+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:08.251347+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:09.251494+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:10.251620+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:11.251767+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:12.251890+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:13.252340+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 4235264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:14.252454+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:15.252587+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d21c0a960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:16.252702+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:17.252844+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:18.253017+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:19.253142+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:20.253254+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:21.253431+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:22.253600+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999995 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:23.253745+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:24.253888+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [3])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:25.254063+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:26.254210+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 74.023246765s of 74.035041809s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:27.254385+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000127 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:28.254579+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:29.254711+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:30.254859+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:31.255073+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4227072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:32.255162+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000127 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 4218880 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:33.255299+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 4186112 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:34.255426+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 4186112 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:35.255565+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 4186112 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:36.255727+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:37.255855+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000457 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:38.256023+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:39.256232+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:40.256436+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:41.256592+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:42.256706+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000457 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:43.257000+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.802015305s of 16.954423904s, submitted: 4
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:44.257188+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:45.257389+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:46.257539+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:47.257700+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000325 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:48.257914+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:49.258135+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:50.258264+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:51.258405+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:52.258532+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000325 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:53.258661+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:54.258791+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:55.258905+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:56.259032+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:57.259157+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000325 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:58.259288+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:40:59.259407+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 4177920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:00.259545+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 4169728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882000 session 0x558d242612c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33800 session 0x558d24251680
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:01.259703+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 4169728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:02.259854+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000325 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 4169728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:03.259999+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 4169728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:04.260135+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f32000 session 0x558d23ac34a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21611800 session 0x558d23642f00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 4169728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:05.260268+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 4169728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:06.260446+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 4169728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:07.260622+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000325 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 4169728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:08.260759+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 4169728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:09.260961+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 4169728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:10.261123+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 4169728 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:11.261231+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d209d9c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.768108368s of 27.772514343s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 4161536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:12.261348+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000457 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 4161536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:13.261528+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 4161536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:14.261659+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 4161536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21d78000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:15.261793+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 4161536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:16.261923+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 4161536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:17.262054+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000589 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 4161536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:18.262164+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 4161536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:19.262298+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 4161536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:20.262428+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 4161536 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:21.262563+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85901312 unmapped: 4153344 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:22.262768+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999998 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:23.262900+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:24.263052+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:25.263221+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:26.263374+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:27.263514+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999998 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:28.263673+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.256614685s of 17.290227890s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:29.263814+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:30.263970+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:31.264123+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:32.264248+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999734 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:33.264390+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:34.264528+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:35.264657+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:36.264792+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:37.264919+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999734 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:38.265134+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:39.265291+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:40.265422+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:41.265603+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4145152 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:42.265744+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999734 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:43.265933+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:44.266164+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:45.266329+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:46.266448+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:47.266562+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999734 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:48.266713+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:49.266831+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:50.266974+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:51.267323+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882800 session 0x558d23bda1e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33000 session 0x558d23ac3680
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:52.267447+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999734 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d242512c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:53.267588+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:54.267731+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:55.267935+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:56.268116+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:57.268281+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999734 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:58.268438+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:41:59.268582+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:00.268737+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:01.268887+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:02.269021+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.530864716s of 33.536846161s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999866 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:03.269222+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21d78000 session 0x558d233014a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:04.269351+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:05.269505+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:06.269645+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:07.269784+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999998 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:08.269968+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4136960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:09.270144+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21d78000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 4128768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:10.270311+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 4128768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:11.270448+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 4128768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:12.270589+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001510 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 4128768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:13.270738+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 4128768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:14.270898+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d209d9c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.364879608s of 12.376594543s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 4128768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:15.271092+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d222dac00 session 0x558d21e2cf00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21611800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 4120576 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:16.271284+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 4104192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:17.271418+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001510 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 4104192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:18.271808+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 4104192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:19.271950+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 4104192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:20.272090+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:21.272193+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:22.272342+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001378 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:23.272511+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:24.272703+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:25.272896+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:26.273044+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.185429573s of 12.199811935s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:27.273179+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001378 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:28.273317+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:29.273460+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:30.273568+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:31.273702+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:32.273905+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:33.274036+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:34.274311+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:35.274465+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:36.274651+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:37.274803+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:38.275000+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:39.275168+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:40.275310+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:41.275462+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:42.275573+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:43.275700+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:44.275878+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:45.276029+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:46.276250+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:47.276376+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:48.276629+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:49.276833+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:50.276990+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:51.277126+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:52.277280+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:53.277405+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:54.277555+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:55.277723+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:56.277871+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:57.278000+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:58.278219+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:59.278363+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:00.278545+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:01.278704+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:02.278865+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:03.279017+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33800 session 0x558d214be960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:04.279148+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:05.279311+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:06.279451+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:07.279611+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:08.279769+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4071424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:09.279949+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4071424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:10.280167+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4071424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:11.280345+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4071424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:12.280515+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4071424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:13.280736+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4071424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:14.280955+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.571327209s of 47.577346802s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:15.281147+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:16.281271+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:17.281454+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001378 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:18.281614+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:19.281862+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:20.282024+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:21.282621+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:22.282746+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001378 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:23.282872+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:24.282995+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:25.283151+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:26.283252+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:27.283439+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001378 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:28.283606+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:29.283732+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:30.283859+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:31.284040+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:32.284214+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.832866669s of 17.840095520s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:33.284384+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:34.284575+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:35.284714+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:36.284899+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:37.285041+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:38.285191+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:39.285438+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:40.285617+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:41.285985+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:42.286451+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:43.286573+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:44.286682+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:45.287061+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882800 session 0x558d22f345a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:46.287151+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:47.287320+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:48.287739+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:49.288203+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:50.288389+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:51.288510+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:52.288679+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:53.289006+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:54.289259+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:55.289587+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:56.289718+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d22223400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.166109085s of 24.170095444s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:57.289981+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001378 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:58.290167+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:59.290350+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:00.290468+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:01.290579+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:02.290705+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002890 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:03.290844+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 4022272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:04.291016+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 4022272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:05.291256+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 4022272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:06.291406+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 4022272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:07.291558+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 4014080 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002890 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:08.291776+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 4014080 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:09.291945+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 4014080 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:10.292067+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 4014080 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:11.292196+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:12.292368+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002890 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:13.292531+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:14.292716+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.972208023s of 17.985364914s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:15.292838+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:16.292994+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:17.293131+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33000 session 0x558d24261680
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d23ab1680
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:18.293362+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002758 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:19.293507+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:20.293648+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:21.293979+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:22.294157+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:23.294323+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002758 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:24.294488+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:25.294628+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:26.294765+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:27.294884+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:28.295038+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002758 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.542468071s of 14.546203613s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:29.295151+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:30.295266+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:31.295370+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:32.295493+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:33.295645+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004402 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:34.295874+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:35.296051+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:36.296210+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:37.296350+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:38.296518+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003811 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:39.296663+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:40.296808+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:41.296960+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:42.297075+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:43.297154+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003811 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:44.297329+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.668762207s of 15.679323196s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:45.297487+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:46.297643+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:47.297774+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:48.297989+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003679 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21d78000 session 0x558d23642b40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882000 session 0x558d238714a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:49.298132+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:50.298253+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:51.298368+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:52.299394+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:53.299533+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003679 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:54.299679+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:55.299820+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:56.299991+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:57.300186+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:58.300423+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003679 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:59.300553+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d209d9c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.809399605s of 14.812178612s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:00.300696+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:01.300913+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 3964928 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:02.301059+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:03.301201+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005323 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:04.301371+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:05.301649+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:06.301785+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:07.301924+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:08.302088+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006835 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:09.302274+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:10.302448+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:11.302678+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:12.302803+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:13.302929+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006244 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.976356506s of 13.989496231s, submitted: 4
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 3948544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:14.303086+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 3948544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:15.303237+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 3948544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:16.303422+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 3948544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:17.303549+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 3948544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:18.303697+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006112 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:19.303829+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:20.303945+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:21.304155+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:22.304360+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:23.304518+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006112 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:24.304639+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33800 session 0x558d24ad8780
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d232aa960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:25.304776+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882c00 session 0x558d24ab7680
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d22223400 session 0x558d231c8f00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:26.305164+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:27.305291+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:28.305440+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006112 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:29.305557+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:30.305691+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:31.305837+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:32.305977+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:33.306127+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006112 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:34.306263+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:35.306391+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d209d9c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.232669830s of 22.235353470s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:36.306506+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:37.306665+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:38.306863+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006376 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:39.306981+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:40.307147+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:41.307286+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:42.307412+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:43.307532+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006376 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:44.307696+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:45.307920+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33000 session 0x558d242512c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23883000 session 0x558d2158e000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:46.308077+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:47.308220+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.302981377s of 12.310188293s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:48.308397+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005785 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:49.308521+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:50.308654+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:51.308783+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:52.308901+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:53.309032+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005521 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:54.309162+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:55.309285+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:56.309407+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33800 session 0x558d215dd4a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:57.309618+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:58.309809+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005653 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:59.309940+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:00.310065+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:01.310209+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:02.310361+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:03.310477+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005653 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9181 writes, 36K keys, 9181 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9181 writes, 2009 syncs, 4.57 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 720 writes, 1140 keys, 720 commit groups, 1.0 writes per commit group, ingest: 0.38 MB, 0.00 MB/s
                                           Interval WAL: 720 writes, 336 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:04.310685+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:05.310818+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:06.310928+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:07.311074+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.203287125s of 19.217010498s, submitted: 4
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:08.311277+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005785 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:09.311388+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:10.311508+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:11.311621+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:12.311767+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:13.311880+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005785 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:14.311983+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:15.312085+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:16.312228+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882000 session 0x558d23bda1e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d24ad90e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:17.312374+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:18.312942+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007165 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:19.313474+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:20.313978+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:21.314188+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.968050957s of 14.455158234s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:22.315190+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:23.315415+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007033 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:24.316491+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:25.316613+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:26.316947+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:27.317331+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:28.317499+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007033 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:29.317624+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:30.317752+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d209d9c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:31.317938+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:32.318089+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:33.318252+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007165 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:34.318382+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.520938873s of 12.531864166s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:35.318519+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:36.318853+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86155264 unmapped: 3899392 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:37.319178+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:38.319318+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008677 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:39.319506+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:40.319649+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:41.319763+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:42.319886+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:43.320012+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:44.320132+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:45.320314+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:46.320503+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:47.320653+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:48.320852+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:49.321002+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:50.321167+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:51.321364+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:52.321514+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:53.321674+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:54.321756+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:55.321872+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:56.321994+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:57.322123+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:58.322266+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:59.322380+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:00.322506+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:01.322629+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:02.322750+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread fragmentation_score=0.000032 took=0.000042s
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:03.322903+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86163456 unmapped: 3891200 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:04.323029+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:05.323147+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:06.323278+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:07.323434+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:08.323611+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:09.323753+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:10.323908+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:11.324055+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:12.324172+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:13.324299+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:14.324437+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:15.324559+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:16.324677+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:17.324817+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:18.325043+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:19.325208+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:20.325335+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86171648 unmapped: 3883008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:21.325472+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.280517578s of 47.298473358s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [0,0,1])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3833856 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:22.325616+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 2596864 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:23.325799+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:24.326181+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:25.326557+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:26.326926+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:27.327123+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:28.327543+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:29.327980+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:30.328244+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:31.328596+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:32.328790+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:33.329053+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:34.329306+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d214b94a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:35.329540+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:36.329690+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:37.329839+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:38.329998+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:39.330182+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:40.330307+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:41.330486+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:42.330641+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:43.330834+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:44.330996+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:45.331147+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.958734512s of 24.189737320s, submitted: 354
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:46.331305+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:47.331480+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:48.331688+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011110 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:49.331843+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:50.332046+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:51.332251+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:52.332372+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:53.332486+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011110 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:54.332632+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:55.332771+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:56.333240+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:57.333559+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:58.333903+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010519 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:59.334194+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:00.334361+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.056202888s of 15.251289368s, submitted: 4
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:01.334649+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:02.334804+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:03.335058+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010387 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:04.335338+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:05.335563+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:06.335762+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:07.335967+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:08.336184+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23883800 session 0x558d22276960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882800 session 0x558d222625a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010387 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:09.336395+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:10.336623+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:11.336868+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:12.336975+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33000 session 0x558d245c45a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:13.337112+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010387 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:14.337287+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:15.337506+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:16.337649+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:17.337787+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:18.337981+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010387 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:19.338137+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.436895370s of 18.440626144s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:20.338324+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:21.338474+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:22.338660+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:23.338801+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010651 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:24.338937+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 3457024 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:25.339087+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 3457024 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:26.339392+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 3457024 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:27.339521+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 3457024 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:28.339687+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 3457024 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010060 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:29.339788+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 3457024 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.091918945s of 10.104517937s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:30.339901+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87654400 unmapped: 3448832 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:31.340028+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87654400 unmapped: 3448832 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:32.340215+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:33.340308+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010849 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:34.340414+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:35.340581+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:36.340703+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:37.340831+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:38.341045+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010849 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:39.341171+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:40.341321+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.158608437s of 11.199194908s, submitted: 4
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:41.341463+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:42.341651+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:43.341855+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010717 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:44.342048+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:45.342216+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:46.342345+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:47.342477+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:48.342659+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:49.342801+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010717 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:50.342953+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882000 session 0x558d24abb4a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23883c00 session 0x558d214bcb40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:51.343089+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:52.343250+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:53.343376+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:54.343505+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010717 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:55.343615+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:56.343729+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:57.343880+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:58.344031+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:59.344353+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010717 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:00.344622+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:01.344819+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d209d9c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.965158463s of 20.967866898s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:02.344993+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:03.345179+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:04.345725+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010849 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:05.346065+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:06.346295+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:07.346578+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:08.346874+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:09.347403+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010849 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 10:05:59 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2592132238' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:10.347815+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:11.348163+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:12.348477+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:13.348651+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.069068909s of 12.073123932s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:14.348796+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010258 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:15.348924+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:16.349070+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:17.349447+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:18.349765+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23883400 session 0x558d238701e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882c00 session 0x558d22f345a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:19.349910+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010126 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:20.350057+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:21.350179+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:22.350311+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:23.350420+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:24.350562+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010126 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:25.350742+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:26.350952+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:27.351176+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:28.351432+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.483628273s of 15.492028236s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:29.351567+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010258 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:30.351695+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:31.351836+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:32.351970+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:33.352150+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:34.352287+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011770 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:35.352441+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:36.352628+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:37.352779+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:38.353001+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:39.353166+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011770 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:40.353296+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:41.353463+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.622843742s of 12.630350113s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:42.353639+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:43.353772+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:44.353921+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011638 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:45.354155+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:46.354474+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:47.354655+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:48.354850+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:49.355044+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011638 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:50.355162+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:51.355287+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:52.355659+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:53.355809+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:54.355937+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011638 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33000 session 0x558d221db4a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d232efa40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:55.356074+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:56.356234+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:57.356430+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:58.356643+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:59.356809+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011638 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:00.356924+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:01.357076+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:02.357159+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:03.357350+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:04.357480+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011638 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:05.357609+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.092479706s of 24.096317291s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:06.357764+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:07.357904+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:08.358190+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:09.358329+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011770 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:10.358474+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:11.358596+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:12.358757+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:13.358954+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:14.359158+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013282 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:15.359346+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:16.359563+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:17.359725+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.254719734s of 12.262475967s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:18.359930+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:19.360073+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012691 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:20.360215+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:21.361246+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:22.361378+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:23.361782+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:24.362004+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:25.362171+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:26.362324+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:27.362428+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:28.362633+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:29.362780+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:30.362941+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:31.363238+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:32.363441+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:33.363616+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:34.363763+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:35.363920+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:36.364322+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:37.364459+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:38.364611+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:39.364734+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:40.374601+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:41.374741+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:42.374908+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:43.375060+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:44.375211+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:45.375339+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:46.375536+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:47.375720+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:48.375890+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:49.376026+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:50.376160+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:51.376333+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:52.376480+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:53.376601+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:54.376716+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:55.376835+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:56.376996+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:57.377170+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:58.377421+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:59.377623+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:00.377781+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:01.377917+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:02.378199+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:03.378345+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:04.378508+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:05.378637+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:06.378765+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:07.378952+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:08.379127+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:09.379242+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:10.379465+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:11.379668+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:12.379787+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:13.379933+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:14.380648+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:15.381008+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:16.381482+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:17.381698+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:18.381909+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:19.382382+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:20.382581+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:21.382752+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:22.382969+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:23.383351+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:24.383827+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:25.384075+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:26.384585+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:27.384813+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:28.385166+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:29.385363+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882800 session 0x558d214b9a40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:30.385646+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:31.385900+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:32.386169+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23883400 session 0x558d245c4d20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882c00 session 0x558d245d8960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:33.386365+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:34.386526+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:35.386729+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:36.386948+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:37.387114+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:38.387272+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:39.387407+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:40.387640+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 82.736328125s of 82.741722107s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:41.387778+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:42.387914+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:43.388027+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:44.388213+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012691 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:45.388378+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:46.388494+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:47.388615+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:48.388704+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:49.388785+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014203 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:50.388925+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:51.389051+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:52.389187+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:53.389312+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.260792732s of 13.267044067s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:54.389397+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014071 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:55.389517+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:56.389627+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:57.389747+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:58.390059+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:59.390158+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014071 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:00.390304+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:01.390433+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:02.390554+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:03.390700+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:04.390866+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013939 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:05.391000+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:06.391163+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:07.391385+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23883000 session 0x558d245c4b40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33800 session 0x558d24251e00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:08.391582+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:09.391697+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _renew_subs
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.101872444s of 15.108276367s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _renew_subs
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022869 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 97148928 unmapped: 2342912 heap: 99491840 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:10.391868+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _renew_subs
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 153 ms_handle_reset con 0x558d21f33800 session 0x558d245c50e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 19030016 heap: 107888640 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:11.392002+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 27344896 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:12.392132+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fb1c6000/0x0/0x4ffc00000, data 0x157ea5e/0x1643000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [0,1])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 154 ms_handle_reset con 0x558d23882800 session 0x558d24ad9a40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88956928 unmapped: 27328512 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:13.392264+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c5000/0x0/0x4ffc00000, data 0x1580b66/0x1646000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 27312128 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:14.392377+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171994 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 27312128 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:15.392486+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:16.392584+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:17.392694+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:18.392832+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:19.392965+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172126 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:20.393193+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:21.393371+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:22.393497+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:23.393619+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:24.393746+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172126 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:25.393876+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:26.394040+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:27.394220+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:28.394434+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:29.394615+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172126 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:30.394793+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:31.395040+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:32.395272+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:33.395398+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:34.395536+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.825229645s of 25.062850952s, submitted: 69
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170222 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:35.395662+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:36.395801+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:37.395967+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:38.396205+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:39.396370+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170222 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:40.396543+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:41.396665+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:42.396796+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:43.396954+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:44.397125+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170222 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:45.397316+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:46.397478+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:47.397762+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:48.399322+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:49.400060+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170222 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:50.400747+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:51.400989+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:52.401405+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:53.402158+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:54.402877+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 27418624 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170222 data_alloc: 218103808 data_used: 184320
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:55.403390+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 27418624 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:56.403574+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 27418624 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:57.403718+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 27418624 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:58.403925+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.014596939s of 24.018394470s, submitted: 1
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 ms_handle_reset con 0x558d23883000 session 0x558d24ab6780
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 ms_handle_reset con 0x558d23883400 session 0x558d24ab7e00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:59.404054+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96878592 unmapped: 19406848 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193444 data_alloc: 218103808 data_used: 7000064
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:00.404503+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96878592 unmapped: 19406848 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _renew_subs
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 157 ms_handle_reset con 0x558d2336c000 session 0x558d24abab40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:01.404627+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 19365888 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fadbc000/0x0/0x4ffc00000, data 0x1986d64/0x1a4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:02.404936+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 19365888 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:03.405225+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 19365888 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:04.405520+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 19365888 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227084 data_alloc: 218103808 data_used: 7000064
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:05.405780+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 19365888 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fadb8000/0x0/0x4ffc00000, data 0x1988d36/0x1a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:06.406006+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 19365888 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:07.406156+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fadb8000/0x0/0x4ffc00000, data 0x1988d36/0x1a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d243a7e00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 19054592 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:08.406314+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 19054592 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:09.406500+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 19054592 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x19acd59/0x1a77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:10.406677+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258577 data_alloc: 234881024 data_used: 11198464
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x19acd59/0x1a77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x19acd59/0x1a77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:11.406835+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:12.407004+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x19acd59/0x1a77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:13.407166+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x19acd59/0x1a77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:14.407368+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:15.407501+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258577 data_alloc: 234881024 data_used: 11198464
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:16.407692+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x19acd59/0x1a77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:17.407825+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:18.408026+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:19.408163+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:20.408320+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258577 data_alloc: 234881024 data_used: 11198464
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 14704640 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.118104935s of 22.209205627s, submitted: 39
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:21.408458+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 15106048 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad33000/0x0/0x4ffc00000, data 0x1a0ed59/0x1ad9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:22.408598+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:23.408725+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:24.408865+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:25.409010+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269699 data_alloc: 234881024 data_used: 11309056
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:26.409150+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad2f000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:27.409273+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:28.409459+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:29.409577+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:30.409688+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269699 data_alloc: 234881024 data_used: 11309056
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:31.409861+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad2f000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:32.410001+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:33.410141+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:34.410291+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:35.410442+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269699 data_alloc: 234881024 data_used: 11309056
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:36.410634+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:37.410782+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad2f000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:38.410947+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad2f000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:39.411073+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:40.411200+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269699 data_alloc: 234881024 data_used: 11309056
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:41.411354+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad2f000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:42.411520+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:43.411665+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.617031097s of 23.632219315s, submitted: 3
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883400 session 0x558d231c9680
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:44.411781+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad2f000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d231bcd20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 14516224 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:45.411899+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c800 session 0x558d23bc8780
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309974 data_alloc: 234881024 data_used: 11833344
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336cc00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336cc00 session 0x558d215c6960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 14401536 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:46.412053+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 14401536 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:47.412210+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 14401536 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa822000/0x0/0x4ffc00000, data 0x1f1edbb/0x1fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:48.412395+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 14401536 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:49.412543+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 14401536 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa822000/0x0/0x4ffc00000, data 0x1f1edbb/0x1fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:50.412970+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309990 data_alloc: 234881024 data_used: 11833344
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101916672 unmapped: 14368768 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa822000/0x0/0x4ffc00000, data 0x1f1edbb/0x1fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:51.413772+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101916672 unmapped: 14368768 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:52.414632+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105340928 unmapped: 10944512 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa822000/0x0/0x4ffc00000, data 0x1f1edbb/0x1fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:53.414798+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 9420800 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:54.415356+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 9420800 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:55.415530+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345710 data_alloc: 234881024 data_used: 17121280
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 9420800 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa822000/0x0/0x4ffc00000, data 0x1f1edbb/0x1fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:56.416184+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 9388032 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:57.416351+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 9388032 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:58.416583+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 9388032 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:59.416791+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 9388032 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:00.416927+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345710 data_alloc: 234881024 data_used: 17121280
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 9388032 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:01.417092+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 9388032 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa822000/0x0/0x4ffc00000, data 0x1f1edbb/0x1fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:02.417466+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 9355264 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:03.417699+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.244815826s of 19.362913132s, submitted: 31
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 4325376 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:04.417897+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112279552 unmapped: 7151616 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:05.418083+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423254 data_alloc: 234881024 data_used: 18010112
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 8413184 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8db7000/0x0/0x4ffc00000, data 0x27e9dbb/0x28b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:06.418342+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8db7000/0x0/0x4ffc00000, data 0x27e9dbb/0x28b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 8413184 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:07.418497+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 8413184 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:08.418718+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 8413184 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:09.418914+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8db7000/0x0/0x4ffc00000, data 0x27e9dbb/0x28b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 8413184 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:10.419163+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8db7000/0x0/0x4ffc00000, data 0x27e9dbb/0x28b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423558 data_alloc: 234881024 data_used: 18018304
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 8404992 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:11.419399+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d21df4780
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 8404992 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:12.419613+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d243a6000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 11698176 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:13.419897+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 11698176 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:14.420182+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 11698176 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:15.420369+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277852 data_alloc: 234881024 data_used: 11833344
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 11698176 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:16.420524+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f93a4000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 11698176 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:17.420828+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d24abbc20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d23870b40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 11698176 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:18.421047+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.526162148s of 14.800761223s, submitted: 109
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c800 session 0x558d22262000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:19.421179+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:20.421320+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:21.421444+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:22.421653+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:23.421784+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:24.421981+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:25.422136+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:26.422265+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:27.422415+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:28.422610+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:29.422732+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:30.422853+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:31.422963+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:32.423051+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:33.423198+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:34.423265+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:35.423373+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:36.423547+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:37.423640+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:38.423785+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:39.423942+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:40.424123+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:41.424243+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:42.424410+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:43.424545+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:44.424679+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:45.424802+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:46.424962+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:47.425094+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:48.425285+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:49.425462+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:50.425617+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:51.425792+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.755134583s of 33.798049927s, submitted: 15
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d242505a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:52.425940+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104734720 unmapped: 24215552 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:53.426131+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104734720 unmapped: 24215552 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:54.426328+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f99be000/0x0/0x4ffc00000, data 0x1be4d36/0x1cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104734720 unmapped: 24215552 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:55.427333+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258560 data_alloc: 218103808 data_used: 7000064
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104734720 unmapped: 24215552 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:56.427509+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d242605a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 24199168 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:57.427630+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 24199168 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:58.427846+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:59.428219+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f99bd000/0x0/0x4ffc00000, data 0x1be4d59/0x1caf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:00.428514+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305477 data_alloc: 234881024 data_used: 11505664
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:01.429019+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f99bd000/0x0/0x4ffc00000, data 0x1be4d59/0x1caf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:02.429627+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:03.430155+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:04.430358+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:05.430534+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305477 data_alloc: 234881024 data_used: 11505664
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:06.430818+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:07.431036+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f99bd000/0x0/0x4ffc00000, data 0x1be4d59/0x1caf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:08.431386+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.502574921s of 16.565576553s, submitted: 10
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 20357120 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:09.431621+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:10.431823+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364337 data_alloc: 234881024 data_used: 12042240
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:11.432052+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:12.432367+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:13.432660+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:14.432797+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:15.432945+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364337 data_alloc: 234881024 data_used: 12042240
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:16.433191+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:17.433370+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:18.433619+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:19.433900+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:20.434069+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364337 data_alloc: 234881024 data_used: 12042240
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:21.434240+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:22.434457+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:23.434686+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:24.434900+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:25.435041+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
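[editor's note] The paired commit_cache_size messages recur once per MempoolThread pass, always with the same two values. To the six digits rocksdb prints, 0.285714 and 0.0555556 are exactly 2/7 and 1/18, which suggests (an inference, not something the log states) two fixed pool-ratio computations rather than anything drifting. A one-line check:

    # 0.285714 and 0.0555556 match 2/7 and 1/18 at six significant digits.
    assert f"{2/7:.6g}" == "0.285714"
    assert f"{1/18:.6g}" == "0.0555556"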
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364337 data_alloc: 234881024 data_used: 12042240
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:26.435239+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:27.435373+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:28.435535+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:29.435708+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:30.435876+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364337 data_alloc: 234881024 data_used: 12042240
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:31.436039+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:32.436198+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:33.436350+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:34.436482+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:35.436595+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364337 data_alloc: 234881024 data_used: 12042240
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:36.436717+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:37.436844+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
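[editor's note] The heartbeat lines carry a store_statfs triple in hex. Treating the first field as bytes available and the third as total device bytes is an assumption, though it fits the magnitudes here: the third value stays fixed at about 20 GiB while the first tracks just below it. A decoding sketch with those hypothetical field names:

    import re

    STATFS_RE = re.compile(
        r"store_statfs\((0x[0-9a-f]+)/(0x[0-9a-f]+)/(0x[0-9a-f]+)"
    )

    def statfs_gib(line):
        """Decode the store_statfs hex triple; field names are an assumption."""
        m = STATFS_RE.search(line)
        if m is None:
            return None
        available, internally_reserved, total = (int(x, 16) for x in m.groups())
        return {
            "total_gib": total / 2**30,        # 0x4ffc00000 -> ~20.0 GiB
            "available_gib": available / 2**30,
            "used_gib": (total - available) / 2**30,
        }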
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:38.437073+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d231bdc20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.224287033s of 30.359544754s, submitted: 50
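[editor's note] The _kv_sync_thread utilization line is a simple ratio: over this roughly 30 s window the kv sync thread was idle 30.22 s of 30.36 s, i.e. busy about 0.45% while flushing 50 submitted transactions. A sketch deriving the busy fraction (helper name is hypothetical):

    import re

    KV_RE = re.compile(r"idle ([\d.]+)s of ([\d.]+)s, submitted: (\d+)")

    def kv_sync_busy(line):
        """Busy fraction and throughput for one _kv_sync_thread line."""
        m = KV_RE.search(line)
        if m is None:
            return None
        idle, total, submitted = float(m.group(1)), float(m.group(2)), int(m.group(3))
        busy = total - idle
        return {
            "busy_pct": 100.0 * busy / total,  # ~0.45% for the line above
            "txns_per_busy_sec": submitted / busy if busy else float("inf"),
        }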
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 24969216 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:39.437275+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d236583c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:40.437407+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219187 data_alloc: 218103808 data_used: 4837376
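[editor's note] Each _resize_shards line lists allocation versus use for the four bluestore cache pools (kv, kv_onode, meta, data); in this window the allocations dwarf actual use, e.g. kv_used is 2144 bytes against a kv_alloc of about 1.1 GiB, consistent with a mostly idle OSD. A sketch turning one line into per-pool utilization percentages (helper name is hypothetical):

    import re

    # Captures name/alloc/used triples such as "kv_alloc: 1207959552 kv_used: 2144".
    PAIR_RE = re.compile(r"(\w+)_alloc: (\d+) \1_used: (\d+)")

    def shard_utilization(line):
        """Map pool name -> percent of its allocation actually used."""
        return {
            name: (100.0 * int(used) / int(alloc)) if int(alloc) else 0.0
            for name, alloc, used in PAIR_RE.findall(line)
        }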
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:41.437550+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:42.437715+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:43.437895+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:44.438045+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:45.438179+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219187 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:46.438347+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:47.438493+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:48.438661+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:49.438817+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:50.439703+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219187 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:51.439911+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:52.440065+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:53.440202+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:54.440341+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:55.440511+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219187 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:56.440642+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336cc00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336cc00 session 0x558d215d74a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d24261c20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d24ad83c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d214be960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.965669632s of 18.014984131s, submitted: 19
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:57.440796+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d22f35e00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883400 session 0x558d24ad8780
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104701952 unmapped: 28450816 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d24455e00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d231bc000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d24ab74a0
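[editor's note] The handle_auth_request / ms_handle_reset bursts above show short-lived sessions being re-authenticated and torn down on a handful of recycled connection objects: the same con pointers, e.g. 0x558d23882800, reappear with fresh session pointers. A counting sketch to spot which connections churn most (helper name is hypothetical):

    import re
    from collections import Counter

    RESET_RE = re.compile(r"ms_handle_reset con (0x[0-9a-f]+) session (0x[0-9a-f]+)")

    def reset_churn(lines):
        """Count resets per connection pointer across a batch of log lines."""
        resets = Counter()
        for line in lines:
            m = RESET_RE.search(line)
            if m:
                resets[m.group(1)] += 1
        return resets.most_common()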
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:58.441006+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104701952 unmapped: 28450816 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:59.441190+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104701952 unmapped: 28450816 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f979f000/0x0/0x4ffc00000, data 0x1e02d46/0x1ecd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d244552c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:00.441326+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104701952 unmapped: 28450816 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d24abb860
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285605 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:01.441455+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 28467200 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d2158e5a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d2149da40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:02.441582+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104710144 unmapped: 28442624 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f979d000/0x0/0x4ffc00000, data 0x1e02d79/0x1ecf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:03.441706+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103964672 unmapped: 29188096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2620 syncs, 4.03 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1371 writes, 3660 keys, 1371 commit groups, 1.0 writes per commit group, ingest: 2.75 MB, 0.00 MB/s
                                           Interval WAL: 1371 writes, 611 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
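[editor's note] This is the usual rocksdb stats dump on its 600 s interval, and its derived figures are internally consistent: the interval WAL "writes per sync" is just 1371/611, about 2.24, and a cumulative ingest of 0.02 GB over 1800 s rounds to the reported 0.01 MB/s. A quick arithmetic check:

    interval_wal_writes, interval_wal_syncs = 1371, 611
    print(f"{interval_wal_writes / interval_wal_syncs:.2f} writes per sync")  # 2.24

    cumulative_ingest_gb, uptime_s = 0.02, 1800.1
    print(f"{cumulative_ingest_gb * 1024 / uptime_s:.2f} MB/s")  # 0.01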
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:04.441827+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:05.442007+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340445 data_alloc: 234881024 data_used: 11751424
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f979d000/0x0/0x4ffc00000, data 0x1e02d79/0x1ecf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:06.442178+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:07.442346+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:08.442497+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:09.442662+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f979d000/0x0/0x4ffc00000, data 0x1e02d79/0x1ecf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:10.442811+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340445 data_alloc: 234881024 data_used: 11751424
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:11.442963+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:12.443128+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f979d000/0x0/0x4ffc00000, data 0x1e02d79/0x1ecf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:13.443283+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:14.443463+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.491628647s of 17.575763702s, submitted: 24
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:15.443586+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 24264704 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386909 data_alloc: 234881024 data_used: 11845632
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:16.443841+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 24313856 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f91d7000/0x0/0x4ffc00000, data 0x23c7d79/0x2494000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:17.443982+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 25157632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:18.444208+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 25157632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:19.444353+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 25157632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
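[editor's note] This is the only outbound monitor message in the window, addressed with Ceph's msgr2 entity-address syntax, which reads as protocol:ip:port/nonce (3300 being the v2 monitor port). A parsing sketch under that reading (helper name is hypothetical):

    import re

    ADDR_RE = re.compile(r"at (v[12]):([\d.]+):(\d+)/(\d+)")

    def parse_entity_addr(line):
        """Split a v1/v2 entity address into its components."""
        m = ADDR_RE.search(line)
        if m is None:
            return None
        proto, ip, port, nonce = m.groups()
        return {"proto": proto, "ip": ip, "port": int(port), "nonce": int(nonce)}

    # parse_entity_addr("... at v2:192.168.122.100:3300/0")
    # -> {'proto': 'v2', 'ip': '192.168.122.100', 'port': 3300, 'nonce': 0}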
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:20.444523+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 25157632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f915a000/0x0/0x4ffc00000, data 0x2444d79/0x2511000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391019 data_alloc: 234881024 data_used: 11915264
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:21.444642+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 25157632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:22.444775+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 25157632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:23.444884+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 24887296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:24.445055+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 24887296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:25.445286+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 24887296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f913a000/0x0/0x4ffc00000, data 0x2465d79/0x2532000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389963 data_alloc: 234881024 data_used: 11915264
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:26.445454+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 24887296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:27.445588+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 24887296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:28.445742+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 24887296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.405244827s of 14.579542160s, submitted: 62
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:29.445890+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108339200 unmapped: 24813568 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9131000/0x0/0x4ffc00000, data 0x246ed79/0x253b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:30.446034+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 24805376 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390067 data_alloc: 234881024 data_used: 11915264
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:31.446285+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 24805376 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:32.447180+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 24805376 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:33.448315+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 24788992 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d215d70e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d400 session 0x558d24ab6d20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d800 session 0x558d24ab63c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d24ab7e00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d24ab74a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:34.448496+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:35.449232+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8924000/0x0/0x4ffc00000, data 0x2c7bd79/0x2d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455854 data_alloc: 234881024 data_used: 11915264
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:36.450170+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:37.450653+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8921000/0x0/0x4ffc00000, data 0x2c7ed79/0x2d4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:38.450873+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:39.451014+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:40.451161+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455854 data_alloc: 234881024 data_used: 11915264
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:41.451445+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.207920074s of 12.297619820s, submitted: 26
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d400 session 0x558d24ab6000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8921000/0x0/0x4ffc00000, data 0x2c7ed79/0x2d4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:42.451633+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 24043520 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:43.451768+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 22986752 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8920000/0x0/0x4ffc00000, data 0x2c7ed9c/0x2d4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:44.451909+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 17555456 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:45.452067+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 17555456 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1513695 data_alloc: 234881024 data_used: 19017728
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:46.452173+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8920000/0x0/0x4ffc00000, data 0x2c7ed9c/0x2d4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115630080 unmapped: 17522688 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8920000/0x0/0x4ffc00000, data 0x2c7ed9c/0x2d4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:47.452312+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115744768 unmapped: 17408000 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:48.452679+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115744768 unmapped: 17408000 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:49.452905+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115744768 unmapped: 17408000 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:50.453169+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f891e000/0x0/0x4ffc00000, data 0x2c7fd9c/0x2d4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 17375232 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514287 data_alloc: 234881024 data_used: 19021824
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:51.453374+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 17375232 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:52.453613+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 17375232 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:53.453747+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 17375232 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.486683846s of 12.517056465s, submitted: 8
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:54.453886+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 121249792 unmapped: 11902976 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:55.454082+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 11747328 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1599979 data_alloc: 234881024 data_used: 19943424
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f7e16000/0x0/0x4ffc00000, data 0x3788d9c/0x3856000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:56.454389+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 14491648 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:57.454549+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 14483456 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:58.454726+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 14483456 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f7e16000/0x0/0x4ffc00000, data 0x3788d9c/0x3856000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:59.454924+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 14450688 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:00.455132+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 14450688 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1600131 data_alloc: 234881024 data_used: 19947520
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:01.455294+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 14450688 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:02.455417+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d231bdc20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 14434304 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f7e16000/0x0/0x4ffc00000, data 0x3788d9c/0x3856000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336dc00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336dc00 session 0x558d2335dc20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:03.455528+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 19398656 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:04.455835+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 19398656 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:05.456170+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f88c9000/0x0/0x4ffc00000, data 0x2472d79/0x253f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 19398656 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400713 data_alloc: 234881024 data_used: 10334208
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:06.456406+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 19398656 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d242c9e00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.439421654s of 12.743181229s, submitted: 128
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d22f345a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:07.456550+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d232aa960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:08.456805+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:09.456968+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:10.457125+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239810 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:11.457252+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:12.457578+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:13.457808+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d21e2d680
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610000 session 0x558d2158e3c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:14.458185+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21611800 session 0x558d23871e00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:15.458331+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239810 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:16.458618+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:17.458908+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:18.459209+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:19.459355+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:20.459624+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239810 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:21.459808+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.715085030s of 14.812863350s, submitted: 35
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:22.459983+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 24150016 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:23.460158+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,1])
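
Worth noting: this is the only heartbeat in the section with a non-empty op history. If op hist is the usual power-of-two histogram of op queue ages, [0,0,0,1] means a single in-flight op was old enough at sampling time to land in the fourth bin; every other heartbeat here reports op hist [], an empty queue.
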
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109150208 unmapped: 24002560 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:24.460316+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 23904256 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:25.460496+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 23904256 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239518 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:26.460685+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 23896064 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:27.460806+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 23896064 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:28.461001+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24ab74a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 23871488 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:29.461161+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 23863296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:30.461393+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9790000/0x0/0x4ffc00000, data 0x1a02d36/0x1acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 23855104 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277516 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:31.461546+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 23855104 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:32.461669+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 23855104 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21611800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21611800 session 0x558d24ab6b40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:33.461854+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9790000/0x0/0x4ffc00000, data 0x1a02d36/0x1acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d23643a40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 23855104 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d242623c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.204575539s of 12.448491096s, submitted: 356
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d242612c0
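
The handle_auth_request / ms_handle_reset pairs scattered through this window are short-lived inbound connections, consistent with periodic probes from peers or the mgr: the messenger adds a cephx challenge for a new connection, and moments later the same Connection pointer shows up in a reset as the remote side hangs up (pointer values recur because freed Connection objects get reallocated). A minimal log-scanning sketch, plain log analysis rather than any Ceph API, that pairs each challenge with the next reset on the same pointer to gauge the churn:

    # Sketch (plain log analysis, not a Ceph interface): count how many
    # auth challenges are followed by a reset on the same connection pointer.
    import re
    from collections import defaultdict

    CHALLENGE = re.compile(r"added challenge on (0x[0-9a-f]+)")
    RESET = re.compile(r"ms_handle_reset con (0x[0-9a-f]+)")

    def challenge_reset_pairs(lines):
        outstanding = defaultdict(int)   # pointer -> unmatched challenges
        pairs = 0
        for line in lines:
            if (m := CHALLENGE.search(line)):
                outstanding[m.group(1)] += 1
            elif (m := RESET.search(line)) and outstanding[m.group(1)] > 0:
                outstanding[m.group(1)] -= 1
                pairs += 1
        return pairs

    # e.g. challenge_reset_pairs(open("osd.log")) over this journal slice
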
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:34.462015+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 23814144 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d25c9ec00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:35.462174+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 23814144 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:36.462436+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281167 data_alloc: 218103808 data_used: 4960256
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 23977984 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:37.462583+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f978f000/0x0/0x4ffc00000, data 0x1a02d46/0x1acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 23977984 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:38.462886+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 23977984 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:39.463169+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 23977984 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f978f000/0x0/0x4ffc00000, data 0x1a02d46/0x1acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:40.463449+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 23977984 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:41.463644+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302295 data_alloc: 218103808 data_used: 8114176
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f978f000/0x0/0x4ffc00000, data 0x1a02d46/0x1acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 23977984 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:42.463909+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109182976 unmapped: 23969792 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:43.464150+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109182976 unmapped: 23969792 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f978f000/0x0/0x4ffc00000, data 0x1a02d46/0x1acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:44.464441+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109182976 unmapped: 23969792 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:45.464581+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109043712 unmapped: 24109056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:46.464745+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302295 data_alloc: 218103808 data_used: 8114176
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109043712 unmapped: 24109056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.845697403s of 12.867346764s, submitted: 7
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:47.464920+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111583232 unmapped: 21569536 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:48.465250+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 21716992 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:49.465446+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92c1000/0x0/0x4ffc00000, data 0x1ec7d46/0x1f92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 21692416 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:50.465566+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 21692416 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:51.465776+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351943 data_alloc: 218103808 data_used: 8171520
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 21692416 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:52.465899+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 21692416 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:53.466055+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 21684224 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:54.466236+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 22085632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:55.466383+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92a9000/0x0/0x4ffc00000, data 0x1ee8d46/0x1fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 22085632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:56.466546+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346975 data_alloc: 218103808 data_used: 8171520
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 22085632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:57.466731+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 22085632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:58.466979+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 22085632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:59.467181+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.613057137s of 12.828779221s, submitted: 73
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21d78400 session 0x558d23318f00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d25c9e000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:00.467377+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:01.467782+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346999 data_alloc: 218103808 data_used: 8171520
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f929b000/0x0/0x4ffc00000, data 0x1ef6d46/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:02.467962+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:03.468202+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:04.468450+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:05.468660+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f929b000/0x0/0x4ffc00000, data 0x1ef6d46/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:06.468908+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347375 data_alloc: 218103808 data_used: 8171520
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:07.469074+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 22052864 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:08.469279+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 22052864 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9298000/0x0/0x4ffc00000, data 0x1ef9d46/0x1fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:09.469425+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 22052864 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:10.469500+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 22052864 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:11.469661+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347679 data_alloc: 218103808 data_used: 8179712
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.016714096s of 12.032507896s, submitted: 5
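
    [annotation] The _kv_sync_thread line above is the clearest load signal in this stretch:
    idle for 12.0167 s of a 12.0325 s window with only 5 transactions submitted puts the
    RocksDB sync thread's duty cycle at about 0.13%. The arithmetic:

        idle, window, submitted = 12.016714096, 12.032507896, 5
        busy = window - idle                  # ~0.016 s of actual commit work
        print(f"duty cycle {100 * busy / window:.2f}%, "
              f"{submitted / window:.2f} commits/s")
        # -> duty cycle 0.13%, 0.42 commits/s
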
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 21880832 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:12.469793+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 21880832 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:13.469920+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20000 session 0x558d214bf4a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20400 session 0x558d242625a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d243a6f00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9288000/0x0/0x4ffc00000, data 0x1f09d46/0x1fd4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 25714688 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d24ab63c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d242c9c20
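
    [annotation] The handle_auth_request / ms_handle_reset pairs above are short-lived
    inbound connections: each peer gets a cephx challenge on a connection object, then drops
    it, and the messenger reports the reset with the same con pointer. The pointers are heap
    addresses and get recycled (0x558d23b20000 reappears further down for a different
    session), so pairing challenge to reset has to go by most recent use of a pointer, not
    treat the address as an identity. A sketch of that grouping:

        import re
        from collections import defaultdict

        EVENT = re.compile(r"(handle_auth_request added challenge on"
                           r"|ms_handle_reset con) (0x[0-9a-f]+)")

        def connection_events(lines):
            # Groups challenge/reset events by con pointer, in log order; a
            # repeated pointer means the allocator reused the address, not
            # that the same peer came back.
            by_con = defaultdict(list)
            for lineno, line in enumerate(lines, 1):
                m = EVENT.search(line)
                if m:
                    kind = m.group(1).split()[0]  # handle_auth_request | ms_handle_reset
                    by_con[m.group(2)].append((lineno, kind))
            return by_con
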
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:14.470061+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 25714688 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:15.470213+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2454da8/0x2520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 25714688 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:16.470384+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395763 data_alloc: 218103808 data_used: 8179712
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 25714688 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:17.470520+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 25706496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:18.470724+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 26181632 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:19.470851+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2454da8/0x2520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 26181632 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:20.471040+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20000 session 0x558d2158e3c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111181824 unmapped: 26173440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:21.471189+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2454da8/0x2520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395548 data_alloc: 218103808 data_used: 8179712
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2454da8/0x2520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 26148864 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:22.471313+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2454da8/0x2520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 22528000 heap: 137355264 old mem: 2845415832 new mem: 2845415832
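
    [annotation] The tune_memory lines are the priority-cache tuner's steady-state report:
    target is the 4 GiB osd_memory_target, mapped/unmapped/heap are the allocator's view of
    the process, and old mem / new mem is the cache budget the tuner settled on. With the
    process mapping barely over 100 MiB against a 4 GiB target, the budget never moves from
    2845415832 -- even above, where mapped steps up to ~109.5 MiB. Converting one line's
    fields to MiB (field meanings inferred from their names):

        import re

        line = ("prioritycache tune_memory target: 4294967296 mapped: 114827264"
                " unmapped: 22528000 heap: 137355264 old mem: 2845415832"
                " new mem: 2845415832")
        fields = re.findall(r"(target|mapped|unmapped|heap|old mem|new mem): (\d+)", line)
        for name, value in fields:
            print(f"{name:9s} {int(value) / (1 << 20):8.1f} MiB")
        # target 4096.0, mapped 109.5, unmapped 21.5, heap 131.0,
        # old/new mem both 2713.6 MiB
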
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:23.471494+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 22528000 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:24.471647+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 22528000 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.056807518s of 13.206870079s, submitted: 52
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:25.471801+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2454da8/0x2520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 22478848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:26.471946+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
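
    [annotation] The paired High Pri Pool Ratio lines recur on the same five-second cadence
    as the shard resize below them, which suggests the cache-rebalancing pass also retunes
    two RocksDB block-cache priority pools. The log doesn't say which pool is which, but the
    two values are conspicuously clean fractions -- 0.285714 is 2/7 and 0.0555556 is 1/18 --
    hinting they fall out of small integer ratios in the cache configuration rather than
    being measured quantities. That is a guess; checking the arithmetic behind it:

        from fractions import Fraction

        for ratio in (0.285714, 0.0555556):
            print(Fraction(ratio).limit_denominator(100))  # -> 2/7, then 1/18
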
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1432228 data_alloc: 234881024 data_used: 13516800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 22478848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:27.472167+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d38000/0x0/0x4ffc00000, data 0x2458da8/0x2524000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 22478848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:28.472346+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 22478848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:29.472498+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 22478848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:30.472628+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 22478848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d38000/0x0/0x4ffc00000, data 0x2458da8/0x2524000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:31.472753+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1432476 data_alloc: 234881024 data_used: 13516800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114900992 unmapped: 22454272 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:32.472932+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117530624 unmapped: 19824640 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:33.473074+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 18751488 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:34.473201+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118652928 unmapped: 18702336 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f827c000/0x0/0x4ffc00000, data 0x2f14da8/0x2fe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:35.473423+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118685696 unmapped: 18669568 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:36.473577+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520180 data_alloc: 234881024 data_used: 14266368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118693888 unmapped: 18661376 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:37.473830+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.298201561s of 12.570601463s, submitted: 89
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 19447808 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:38.474011+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 19447808 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8279000/0x0/0x4ffc00000, data 0x2f17da8/0x2fe3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:39.474153+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8279000/0x0/0x4ffc00000, data 0x2f17da8/0x2fe3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118079488 unmapped: 19275776 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:40.474350+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f825a000/0x0/0x4ffc00000, data 0x2f36da8/0x3002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118079488 unmapped: 19275776 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:41.474601+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518452 data_alloc: 234881024 data_used: 14270464
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 19267584 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:42.474813+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 19267584 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:43.474973+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 19267584 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:44.475166+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8254000/0x0/0x4ffc00000, data 0x2f3cda8/0x3008000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 19267584 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:45.475401+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118292480 unmapped: 19062784 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:46.475539+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519144 data_alloc: 234881024 data_used: 14270464
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f824a000/0x0/0x4ffc00000, data 0x2f46da8/0x3012000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118292480 unmapped: 19062784 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:47.475675+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:48.475831+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:49.476008+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f824a000/0x0/0x4ffc00000, data 0x2f46da8/0x3012000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:50.476232+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:51.476362+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519144 data_alloc: 234881024 data_used: 14270464
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:52.476493+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.050909042s of 15.083417892s, submitted: 9
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8244000/0x0/0x4ffc00000, data 0x2f4cda8/0x3018000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:53.476615+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:54.476736+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 18997248 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:55.476864+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8241000/0x0/0x4ffc00000, data 0x2f4dda8/0x3019000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 18997248 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:56.477003+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519252 data_alloc: 234881024 data_used: 14270464
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 18997248 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:57.477154+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 18989056 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:58.477323+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 18989056 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:59.477453+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20570112 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:00.477596+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20570112 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:01.477768+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519524 data_alloc: 234881024 data_used: 14270464
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f823e000/0x0/0x4ffc00000, data 0x2f52da8/0x301e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20561920 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:02.477940+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20561920 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:03.478085+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20561920 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.415008545s of 11.443326950s, submitted: 9
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:04.478332+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20561920 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:05.478464+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20561920 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:06.478655+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520188 data_alloc: 234881024 data_used: 14270464
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20561920 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8239000/0x0/0x4ffc00000, data 0x2f56da8/0x3022000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:07.478821+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20553728 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:08.479035+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20553728 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:09.479186+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8239000/0x0/0x4ffc00000, data 0x2f56da8/0x3022000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:10.479344+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:11.479563+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520168 data_alloc: 234881024 data_used: 14270464
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:12.479779+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8234000/0x0/0x4ffc00000, data 0x2f5cda8/0x3028000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:13.480067+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:14.480322+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:15.480506+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.406557083s of 11.440868378s, submitted: 10
Nov 24 10:05:59 compute-0 ceph-osd[82549]: mgrc ms_handle_reset ms_handle_reset con 0x558d23035800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3769522832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3769522832,v1:192.168.122.100:6801/3769522832]
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: get_auth_request con 0x558d21611800 auth_method 0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: mgrc handle_mgr_configure stats_period=5
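
    [annotation] The five mgrc/monclient lines above are a complete mgr session bounce: the
    messenger reports a reset, mgrc tears down the session to v2:192.168.122.100:6800,
    immediately dials the mgr's full address vector (v2 plus the legacy v1 endpoint),
    re-enters auth negotiation, and the new session is configured with stats_period=5, i.e.
    the OSD resumes reporting stats to the mgr every five seconds. The bracketed addrvec has
    a regular proto:ip:port/nonce shape that is easy to pull apart:

        import re

        ADDR = re.compile(r"(v[12]):([0-9.]+):(\d+)/(\d+)")

        def parse_addrvec(text):
            # e.g. "[v2:192.168.122.100:6800/3769522832,v1:192.168.122.100:6801/...]"
            return [{"proto": proto, "ip": ip, "port": int(port), "nonce": int(nonce)}
                    for proto, ip, port, nonce in ADDR.findall(text)]
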
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:16.480719+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520076 data_alloc: 234881024 data_used: 14270464
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:17.480872+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8230000/0x0/0x4ffc00000, data 0x2f5fda8/0x302b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20529152 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:18.481015+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20529152 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:19.481190+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20529152 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:20.481385+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20520960 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:21.481617+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520164 data_alloc: 234881024 data_used: 14270464
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20520960 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:22.481855+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f822d000/0x0/0x4ffc00000, data 0x2f62da8/0x302e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20520960 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:23.482028+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 20512768 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:24.482217+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 20512768 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:25.482383+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.978147507s of 10.000229836s, submitted: 7
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d2153c3c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20c00 session 0x558d24260d20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 20480000 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:26.482517+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360002 data_alloc: 218103808 data_used: 8179712
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d2335d0e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:27.482698+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f876a000/0x0/0x4ffc00000, data 0x1f35d46/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:28.482910+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:29.483080+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8765000/0x0/0x4ffc00000, data 0x1f3ad46/0x2005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:30.483299+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:31.483435+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360378 data_alloc: 218103808 data_used: 8179712
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:32.483606+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:33.483763+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8765000/0x0/0x4ffc00000, data 0x1f3ad46/0x2005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d215d8000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d25c9ec00 session 0x558d24ad9c20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:34.483909+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24ab7c20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:35.484092+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:36.484334+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259967 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:37.484532+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:38.484861+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:39.485033+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:40.485166+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:41.485289+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259967 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:42.485470+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:43.485636+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:44.485959+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:45.486185+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:46.486349+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259967 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:47.486501+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:48.486731+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:49.486885+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:50.487041+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:51.487200+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259967 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:52.487320+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:53.487494+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:54.487656+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:55.487811+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:56.487962+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259967 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:57.488144+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:58.488297+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:59.488421+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d2149d4a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d2153cd20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20c00 session 0x558d2149da40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d23bda1e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.998332977s of 34.264808655s, submitted: 79
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d215d70e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:00.488558+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:01.488669+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327283 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:02.488821+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92e1000/0x0/0x4ffc00000, data 0x1eb1d36/0x1f7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:03.488964+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:04.489067+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92e1000/0x0/0x4ffc00000, data 0x1eb1d36/0x1f7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d22f34000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:05.489195+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d215c7c20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:06.489370+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20c00 session 0x558d232ab4a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d24ab6960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329097 data_alloc: 218103808 data_used: 4837376
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:07.489503+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 24576000 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:08.489684+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 24576000 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:09.489823+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 21757952 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:10.489977+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 21757952 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92e0000/0x0/0x4ffc00000, data 0x1eb1d46/0x1f7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:11.490124+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 21757952 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24250960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d23319c20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393241 data_alloc: 234881024 data_used: 12484608
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:12.490242+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.449810028s of 12.505952835s, submitted: 11
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92e0000/0x0/0x4ffc00000, data 0x1eb1d46/0x1f7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d24aba3c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:13.490392+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:14.490557+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:15.490783+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:16.490958+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264968 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:17.491140+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:18.491310+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:19.491429+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:20.491580+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:21.491729+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264968 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:22.491939+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:23.492143+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:24.492268+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:25.492468+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:26.492617+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264968 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:27.492781+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:28.492922+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d23bdad20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20c00 session 0x558d232aaf00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20c00 session 0x558d24abad20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24abb680
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:29.493024+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.988937378s of 17.018053055s, submitted: 7
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114384896 unmapped: 22970368 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d2158e780
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:30.493163+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 27164672 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:31.493282+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 27164672 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327830 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:32.493423+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 27164672 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f93e8000/0x0/0x4ffc00000, data 0x1daad36/0x1e74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:33.493559+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 27156480 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:34.493693+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 27156480 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d232aba40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f93e8000/0x0/0x4ffc00000, data 0x1daad36/0x1e74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:35.493849+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 27131904 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:36.493953+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 27131904 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331368 data_alloc: 218103808 data_used: 3067904
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:37.494056+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111378432 unmapped: 25976832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f93e7000/0x0/0x4ffc00000, data 0x1daad59/0x1e75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:38.494165+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 23502848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f93e7000/0x0/0x4ffc00000, data 0x1daad59/0x1e75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:39.494289+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 23502848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20000 session 0x558d22276b40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.594711304s of 10.678098679s, submitted: 22
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d2149cd20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:40.494424+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 23502848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d2335cf00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:41.494546+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271058 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:42.494718+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:43.494900+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:44.495090+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:45.495318+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:46.495476+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271058 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:47.495659+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:48.495850+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:49.496024+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:50.496168+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:51.496327+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271058 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:52.496492+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:53.496649+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:54.496773+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:55.496897+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:56.497061+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271058 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:57.497224+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:58.497425+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:59.497551+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:00.497682+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:01.497811+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271058 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:02.497972+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:03.498121+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 27369472 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:04.498310+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 27369472 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:05.498450+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 27369472 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:06.498676+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 27369472 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271058 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:07.498825+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 27369472 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d23870b40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d23a1ab40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20c00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20c00 session 0x558d23ab0f00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d23ab0960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.090379715s of 28.186922073s, submitted: 30
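The _kv_sync_thread line above reports idle time over the sampling window plus the number of submitted transactions; the interesting number, the busy fraction, takes one subtraction and one division. Using the figures from that line:

```python
idle, window, submitted = 28.090379715, 28.186922073, 30

busy = window - idle
print(f"busy {busy:.3f}s of {window:.3f}s ({busy / window:.2%}), "
      f"{submitted / window:.1f} commits/s")
# -> busy 0.097s of 28.187s (0.34%), 1.1 commits/s: an essentially idle OSD
```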
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:08.499143+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 27295744 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d23ab0b40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:09.499331+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d214be5a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 27287552 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:10.499594+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 27287552 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92f6000/0x0/0x4ffc00000, data 0x1e9bd98/0x1f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:11.499776+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 27287552 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341067 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:12.499966+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 27287552 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92f6000/0x0/0x4ffc00000, data 0x1e9bd98/0x1f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:13.500164+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 27279360 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:14.500340+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110084096 unmapped: 27271168 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:15.500479+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110084096 unmapped: 27271168 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:16.500635+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110084096 unmapped: 27271168 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d24260780
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341067 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:17.500776+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92f6000/0x0/0x4ffc00000, data 0x1e9bd98/0x1f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d215c70e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 27254784 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:18.500938+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d21d8a780
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 27254784 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.295337677s of 10.542456627s, submitted: 29
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24ab6000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:19.501276+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110133248 unmapped: 27222016 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:20.502720+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25542656 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
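_send_mon_message shows the monitor's address in Ceph's v2:host:port/nonce messenger notation (3300 is the standard messenger-v2 monitor port). A small convenience parser for that notation (a local helper, not a Ceph API):

```python
addr = "v2:192.168.122.100:3300/0"

proto, rest = addr.split(":", 1)
hostport, nonce = rest.rsplit("/", 1)
host, port = hostport.rsplit(":", 1)
print(proto, host, int(port), int(nonce))   # v2 192.168.122.100 3300 0
```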
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:21.502860+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92f5000/0x0/0x4ffc00000, data 0x1e9bda8/0x1f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407206 data_alloc: 234881024 data_used: 12505088
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:22.503047+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92f5000/0x0/0x4ffc00000, data 0x1e9bda8/0x1f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:23.503235+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:24.503414+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:25.503616+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:26.503995+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407206 data_alloc: 234881024 data_used: 12505088
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:27.504167+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:28.504700+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92f5000/0x0/0x4ffc00000, data 0x1e9bda8/0x1f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:29.505059+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 23650304 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:30.506174+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 23650304 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:31.506563+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.962337494s of 12.983584404s, submitted: 7
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 23298048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445254 data_alloc: 234881024 data_used: 12660736
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:32.506728+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 20701184 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:33.506922+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 20381696 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:34.507129+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8dce000/0x0/0x4ffc00000, data 0x23c2da8/0x248e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 20381696 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:35.507295+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 20381696 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8dce000/0x0/0x4ffc00000, data 0x23c2da8/0x248e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:36.507433+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 20242432 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450736 data_alloc: 234881024 data_used: 13234176
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:37.507608+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 20242432 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:38.507799+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 20242432 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8dce000/0x0/0x4ffc00000, data 0x23c2da8/0x248e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:39.507970+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 20209664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:40.508155+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 20209664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:41.508362+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 20209664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450736 data_alloc: 234881024 data_used: 13234176
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:42.508522+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 20209664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:43.508657+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 20209664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:44.508806+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8dce000/0x0/0x4ffc00000, data 0x23c2da8/0x248e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 20176896 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:45.509011+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 20176896 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:46.509159+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.702803612s of 14.805799484s, submitted: 47
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d23a1b2c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d2158e1e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 20176896 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d24ab63c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280017 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:47.509271+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:48.509447+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:49.509607+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:50.509744+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:51.509875+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280017 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:52.510058+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:53.510206+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:54.510319+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:55.510662+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:56.510770+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280017 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:57.510920+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:58.511174+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:59.511301+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:00.511418+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:01.511537+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280017 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:02.511654+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:03.511748+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:04.511896+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:05.512068+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:06.512185+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:07.512361+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280017 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d214b9a40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d23ac3e00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d243a72c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d215c9a40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.234964371s of 21.351840973s, submitted: 34
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d222632c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d232aaf00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24ab6960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d22276960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d215d72c0
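Each handle_auth_request / ms_handle_reset pair in the run above shares a connection pointer: a peer connects, the messenger adds an auth challenge on that connection, and the connection is then reset, with the same few pointers being reused. A sketch that pairs the two events by pointer and counts resets per connection (event strings copied from the lines above):

```python
import re
from collections import Counter

events = [
    "monclient: handle_auth_request added challenge on 0x558d23b20800",
    "osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d214b9a40",
    "monclient: handle_auth_request added challenge on 0x558d21610800",
    "osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d23ac3e00",
]

resets = Counter(m.group(1) for e in events
                 if (m := re.search(r"ms_handle_reset con (0x[0-9a-f]+)", e)))
print(resets)   # each challenged connection is reset exactly once in this window
```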
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:08.512533+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 25927680 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:09.512669+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 25927680 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:10.512811+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 25927680 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:11.512983+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9840000/0x0/0x4ffc00000, data 0x1950da8/0x1a1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 25927680 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9840000/0x0/0x4ffc00000, data 0x1950da8/0x1a1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:12.513214+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314678 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 25927680 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:13.513447+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 25927680 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:14.513589+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d2149d2c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111411200 unmapped: 25944064 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:15.513730+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f983f000/0x0/0x4ffc00000, data 0x1950dcb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111411200 unmapped: 25944064 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:16.513832+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:17.513981+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340867 data_alloc: 218103808 data_used: 6770688
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:18.514238+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:19.514384+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:20.514561+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f983f000/0x0/0x4ffc00000, data 0x1950dcb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:21.514701+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:22.514855+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340867 data_alloc: 218103808 data_used: 6770688
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:23.514996+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.050022125s of 16.154581070s, submitted: 31
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d2149d4a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:24.515145+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 25608192 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9111000/0x0/0x4ffc00000, data 0x207edcb/0x214b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:25.515294+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 25608192 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:26.515440+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 25608192 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:27.515566+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400983 data_alloc: 218103808 data_used: 6774784
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 24838144 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:28.515738+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 24838144 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:29.515877+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8cb3000/0x0/0x4ffc00000, data 0x24dcdcb/0x25a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d2158fc20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113213440 unmapped: 24141824 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:30.515996+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d24263a40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 24133632 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:31.516141+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d24263c20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21800 session 0x558d214bef00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 24125440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8c7b000/0x0/0x4ffc00000, data 0x2514dcb/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:32.516263+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1442101 data_alloc: 218103808 data_used: 6807552
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 24125440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:33.516405+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 24068096 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:34.516521+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.491031647s of 10.713547707s, submitted: 69
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 21266432 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:35.516649+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 21266432 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:36.516796+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 21266432 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:37.516967+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8c59000/0x0/0x4ffc00000, data 0x2535ddb/0x2603000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1491485 data_alloc: 234881024 data_used: 13135872
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 21233664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:38.517159+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 21233664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:39.517315+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 21233664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:40.517529+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 21233664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:41.517707+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 21233664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:42.517849+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1491485 data_alloc: 234881024 data_used: 13135872
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8c59000/0x0/0x4ffc00000, data 0x2535ddb/0x2603000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 21200896 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:43.518012+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 21200896 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:44.518311+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 21200896 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.482767105s of 10.491147041s, submitted: 2
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:45.518459+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8c59000/0x0/0x4ffc00000, data 0x2535ddb/0x2603000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 13230080 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:46.518607+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 13869056 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:47.518780+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578925 data_alloc: 234881024 data_used: 14553088
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 13787136 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:48.518948+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 13787136 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:49.519193+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:50.519326+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f821d000/0x0/0x4ffc00000, data 0x2f71ddb/0x303f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:51.519448+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f821d000/0x0/0x4ffc00000, data 0x2f71ddb/0x303f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:52.519585+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1580445 data_alloc: 234881024 data_used: 14639104
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:53.519735+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:54.519862+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f821d000/0x0/0x4ffc00000, data 0x2f71ddb/0x303f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:55.519988+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f821d000/0x0/0x4ffc00000, data 0x2f71ddb/0x303f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:56.520196+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d243a6000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d21df4b40
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:57.520340+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.993705750s of 12.190896034s, submitted: 85
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388441 data_alloc: 218103808 data_used: 5189632
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d24ad81e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 16482304 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:58.520500+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 16482304 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:59.521197+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f937f000/0x0/0x4ffc00000, data 0x1e10dcb/0x1edd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 16474112 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:00.521333+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d232ef0e0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 16474112 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:01.521463+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d232aa960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:02.521611+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298138 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:03.522200+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:04.522706+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c08000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:05.523052+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:06.523524+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:07.523733+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298138 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:08.524069+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:09.524443+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c08000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:10.524696+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:11.524912+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:12.525040+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298138 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:13.525358+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c08000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:14.525649+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:15.525846+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:16.526030+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:17.526234+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298138 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:18.526417+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:19.526644+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c08000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:20.526877+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:21.527062+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d22276960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24abb860
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d23870960
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:22.527217+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d24ab7e00
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.039752960s of 25.173450470s, submitted: 43
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357123 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d24262000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d2158e5a0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:23.527379+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:24.527538+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:25.527727+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f94e2000/0x0/0x4ffc00000, data 0x1cafd98/0x1d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:26.527919+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:27.528093+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f94e2000/0x0/0x4ffc00000, data 0x1cafd98/0x1d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357067 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:28.528353+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:29.528512+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f94e2000/0x0/0x4ffc00000, data 0x1cafd98/0x1d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:30.528717+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:31.528925+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d22f35680
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 22380544 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f94e2000/0x0/0x4ffc00000, data 0x1cafd98/0x1d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:32.529277+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 22380544 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358767 data_alloc: 218103808 data_used: 3010560
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:33.529394+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:34.529555+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f94be000/0x0/0x4ffc00000, data 0x1cd3d98/0x1d9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:35.529706+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:36.529916+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:37.530054+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401175 data_alloc: 234881024 data_used: 9367552
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:38.530354+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:39.530563+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:40.530746+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f94be000/0x0/0x4ffc00000, data 0x1cd3d98/0x1d9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:41.530931+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:42.531149+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401631 data_alloc: 234881024 data_used: 9379840
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:43.531282+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.676862717s of 20.775457382s, submitted: 32
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118685696 unmapped: 21823488 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:44.531489+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 21544960 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25585 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:45.531634+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x24abd98/0x2576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:46.531831+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:47.532000+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x24abd98/0x2576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472363 data_alloc: 234881024 data_used: 10211328
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:48.532218+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:49.532400+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:50.532550+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x24abd98/0x2576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:51.532695+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:52.532837+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x24abd98/0x2576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472363 data_alloc: 234881024 data_used: 10211328
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:53.532985+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:54.533133+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:55.533272+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:56.533466+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:57.533614+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472363 data_alloc: 234881024 data_used: 10211328
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:58.533773+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x24abd98/0x2576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:59.533928+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:00.534066+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.154628754s of 17.316360474s, submitted: 62
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d23871c20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d22277c20
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b83400
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:01.534197+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b83400 session 0x558d231c92c0
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:02.534313+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:03.534450+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:04.534583+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:05.534706+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:06.534830+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:07.534958+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:08.535151+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:09.535238+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:10.535327+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:11.535453+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:12.535632+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:13.535798+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:14.535955+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:15.536084+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:16.536240+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:17.536402+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:18.536562+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:19.536700+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:20.536812+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:21.537297+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:22.537448+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:23.537609+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:24.537777+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:25.537912+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:26.538157+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:27.538951+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:28.539132+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:29.539269+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:30.539445+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:31.539663+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:32.539843+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:33.540001+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:34.540160+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:35.540314+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:36.540529+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:37.540653+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:38.540790+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:39.540978+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:40.541093+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:41.541268+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:42.541404+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:43.541576+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:44.541780+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:45.541929+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:46.542082+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:47.542297+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:48.542430+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:49.542600+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:50.542768+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:51.542921+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:52.543137+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:53.543254+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:54.543369+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:55.543493+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:56.543613+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:57.543825+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:58.544046+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:59.544162+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:00.544363+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:01.544486+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:02.544637+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:03.544785+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:04.544969+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:05.545172+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 24207360 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:06.545354+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 24207360 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:07.545476+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 24207360 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:08.547643+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 24207360 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:09.547802+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 24207360 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:10.549376+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 24207360 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:11.550631+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:12.551444+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:13.551581+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:14.552047+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:15.552194+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:16.552366+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:17.552551+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:18.552745+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:19.552884+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:20.553029+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:21.553700+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:22.553858+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:23.553989+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:24.554144+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:25.554247+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 24190976 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:26.554350+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 24117248 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: do_command 'config diff' '{prefix=config diff}'
Nov 24 10:05:59 compute-0 ceph-osd[82549]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 10:05:59 compute-0 ceph-osd[82549]: do_command 'config show' '{prefix=config show}'
Nov 24 10:05:59 compute-0 ceph-osd[82549]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 10:05:59 compute-0 ceph-osd[82549]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 10:05:59 compute-0 ceph-osd[82549]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 10:05:59 compute-0 ceph-osd[82549]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 10:05:59 compute-0 ceph-osd[82549]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:27.554450+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 24551424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:05:59 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:05:59 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:28.554593+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 24674304 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:05:59 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:29.554737+0000)
Nov 24 10:05:59 compute-0 ceph-osd[82549]: do_command 'log dump' '{prefix=log dump}'
Nov 24 10:05:59 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17442 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:06:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:06:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:05:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:06:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:06:00 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 10:06:00 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25600 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17463 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 24 10:06:00 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3402713778' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.26939 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.25543 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.17385 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.26948 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.25558 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.17406 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/953032884' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.26969 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.25576 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.17421 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2387178002' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2592132238' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1267946720' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2839996989' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/632773887' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4093387687' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2372817951' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3402713778' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:00 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 24 10:06:00 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/102652043' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25627 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17478 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:00.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:06:00] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:06:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:06:00] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:06:01 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25639 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 24 10:06:01 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3561731369' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17502 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:01 compute-0 nova_compute[257700]: 2025-11-24 10:06:01.295 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:01 compute-0 crontab[285434]: (root) LIST (root)
Nov 24 10:06:01 compute-0 nova_compute[257700]: 2025-11-24 10:06:01.472 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:01 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25657 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.25585 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.17442 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.25600 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.17463 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3737416269' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/102652043' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4273539614' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2953229075' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1925814334' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3561731369' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/231459924' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1699553728' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2545531661' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17529 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:01.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:01 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25675 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:01 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17544 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 24 10:06:02 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2638530521' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:02 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25702 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17574 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 24 10:06:02 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3506976665' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.25627 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.17478 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.25639 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.17502 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.25657 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.17529 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3919931828' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4204316792' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1906280520' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1906280520' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3987749054' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2638530521' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1758111733' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3328823182' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4061091675' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3506976665' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2769527818' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:02 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25714 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17598 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 24 10:06:02 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1472469966' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:06:02 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27125 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:02.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 24 10:06:03 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529875961' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27134 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 24 10:06:03 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4255283993' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27143 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mgr[74626]: [dashboard INFO request] [192.168.122.100:54604] [POST] [200] [0.001s] [4.0B] [eaa28656-b4c1-4d47-9cf0-63074c0fd652] /api/prometheus_receiver
Nov 24 10:06:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:03.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:03 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27149 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 24 10:06:03 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1216611343' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: from='client.25675 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: from='client.17544 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: from='client.25702 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: from='client.17574 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:03 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/811620630' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1472469966' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4187827911' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2337431009' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2529875961' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4255283993' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1514098715' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1367571744' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 24 10:06:03 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1445369186' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:03 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 24 10:06:03 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1880844870' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27170 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 24 10:06:04 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3029797191' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 24 10:06:04 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577620300' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27185 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 24 10:06:04 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083374271' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.25714 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.17598 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.27125 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.27134 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.27143 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.27149 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1216611343' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1445369186' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1449981449' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1880844870' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.27170 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/108309168' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3029797191' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3220301944' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4110595336' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1577620300' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.27185 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2083374271' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2072392436' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 ceph-mon[74331]: pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:04 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 10:06:04 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3410044772' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:06:04 compute-0 systemd[1]: Starting Hostname Service...
Nov 24 10:06:04 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27200 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 systemd[1]: Started Hostname Service.
Nov 24 10:06:04 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 24 10:06:04 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2929018559' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:06:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:04.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:06:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:06:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:06:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:06:05 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 24 10:06:05 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668517519' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27218 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 24 10:06:05 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3166199330' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17757 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:05.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1998033834' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3410044772' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2200727252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.27200 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2929018559' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3321027459' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2931140467' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1668517519' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.27218 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/812305037' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3166199330' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1681524618' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/348803258' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: from='client.17757 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27233 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 24 10:06:05 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/426060483' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:05 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:05 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25819 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:05 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17778 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17793 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25828 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 nova_compute[257700]: 2025-11-24 10:06:06.298 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:06 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25840 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:06 compute-0 nova_compute[257700]: 2025-11-24 10:06:06.474 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:06 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17817 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25855 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='client.27233 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1332274527' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/426060483' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2992184342' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='client.25819 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='client.17778 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='client.17793 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='client.25828 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='client.25840 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='client.17817 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='client.25855 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:06 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2999518195' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 24 10:06:06 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1003062013' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:06:06 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17832 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:06.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:06 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25870 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27308 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 24 10:06:07 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380896036' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27323 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25888 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:07.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:07.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:06:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:07.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 24 10:06:07 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/335999762' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1003062013' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mon[74331]: from='client.17832 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mon[74331]: from='client.25870 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mon[74331]: from='client.27308 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1380896036' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/218252077' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mon[74331]: from='client.27323 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mon[74331]: from='client.25888 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1880514736' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/335999762' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17874 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:07 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25900 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 24 10:06:08 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/293589835' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17892 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:08 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25912 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17913 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:08 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25930 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 sshd-session[286440]: Invalid user nexus from 36.255.3.203 port 56420
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='client.17874 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2545381563' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='client.25900 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/293589835' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2982781899' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='client.17892 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='client.25912 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3641594844' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1811800393' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: from='client.17913 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 24 10:06:08 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2892553981' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:06:08 compute-0 sshd-session[286440]: Received disconnect from 36.255.3.203 port 56420:11: Bye Bye [preauth]
Nov 24 10:06:08 compute-0 sshd-session[286440]: Disconnected from invalid user nexus 36.255.3.203 port 56420 [preauth]
Nov 24 10:06:08 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:08 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:08.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:08.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:09 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.17949 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:09 compute-0 sudo[286678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:06:09 compute-0 sudo[286678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:09 compute-0 sudo[286678]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:09 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27389 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:06:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:09.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:06:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 24 10:06:09 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2699081811' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:06:09 compute-0 ceph-mon[74331]: pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='client.25930 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2699018512' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2892553981' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2080032816' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='client.17949 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='client.27389 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/501692901' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:06:09 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2699081811' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:06:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:06:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:06:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:06:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:06:10 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.25978 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 24 10:06:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3185763502' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:06:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 24 10:06:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/138290104' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:06:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:10 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27428 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:10 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2135132105' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:06:10 compute-0 ceph-mon[74331]: from='client.25978 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:10 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3185763502' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:06:10 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3311372601' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 10:06:10 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1883798475' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:06:10 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/138290104' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:06:10 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 24 10:06:10 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1685863189' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:06:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:10.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:06:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:06:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:06:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:06:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 24 10:06:11 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2910977602' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:06:11 compute-0 nova_compute[257700]: 2025-11-24 10:06:11.298 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:11 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18015 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:11 compute-0 nova_compute[257700]: 2025-11-24 10:06:11.475 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:11 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27452 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:11.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:11 compute-0 ceph-mon[74331]: pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:11 compute-0 ceph-mon[74331]: from='client.27428 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1685863189' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:06:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2910977602' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:06:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1209386023' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 10:06:11 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3221716505' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:06:11 compute-0 ceph-mon[74331]: from='client.18015 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:11 compute-0 ceph-mon[74331]: from='client.27452 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:11 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27458 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:11 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 24 10:06:11 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3027317944' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:06:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:12 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26008 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Nov 24 10:06:12 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2354909685' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 10:06:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:12 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18057 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1577556685' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:06:12 compute-0 ceph-mon[74331]: from='client.27458 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3027317944' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:06:12 compute-0 ceph-mon[74331]: from='client.26008 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2394133039' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 10:06:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2354909685' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 10:06:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3554526462' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:06:12 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3082189847' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 10:06:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:12.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:13 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Nov 24 10:06:13 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1530786313' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27488 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18078 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:13.561Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:13.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:13 compute-0 sudo[287129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:06:13 compute-0 sudo[287129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:13 compute-0 sudo[287129]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27494 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26035 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:13 compute-0 sudo[287179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:06:13 compute-0 sudo[287179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:13 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18090 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:13 compute-0 ceph-mon[74331]: pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:13 compute-0 ceph-mon[74331]: from='client.18057 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:13 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1530786313' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 10:06:13 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2372932552' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 10:06:13 compute-0 ceph-mon[74331]: from='client.27488 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:13 compute-0 ceph-mon[74331]: from='client.18078 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Nov 24 10:06:14 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088268736' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 10:06:14 compute-0 sudo[287179]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:06:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:06:14 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26050 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:06:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:06:14 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:06:14 compute-0 sudo[287443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:06:14 compute-0 sudo[287443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:14 compute-0 sudo[287443]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:14 compute-0 sudo[287484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:06:14 compute-0 sudo[287484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:14 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Nov 24 10:06:14 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/566409870' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 10:06:14 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26059 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:14 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27524 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:14.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:06:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:06:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18129 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:06:15 compute-0 podman[287671]: 2025-11-24 10:06:14.975842078 +0000 UTC m=+0.022440411 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:06:15 compute-0 podman[287671]: 2025-11-24 10:06:15.087124575 +0000 UTC m=+0.133722888 container create 3ab1f4f9714ca692058c1e767929a17e42df691177ead55f9ff4e2ab9c706259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='client.27494 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='client.26035 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='client.18090 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1514606181' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3408717226' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3088268736' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='client.26050 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3492086322' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/566409870' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 10:06:15 compute-0 systemd[1]: Started libpod-conmon-3ab1f4f9714ca692058c1e767929a17e42df691177ead55f9ff4e2ab9c706259.scope.
Nov 24 10:06:15 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:06:15 compute-0 podman[287671]: 2025-11-24 10:06:15.241411415 +0000 UTC m=+0.288009738 container init 3ab1f4f9714ca692058c1e767929a17e42df691177ead55f9ff4e2ab9c706259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_dewdney, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:06:15 compute-0 podman[287671]: 2025-11-24 10:06:15.247449036 +0000 UTC m=+0.294047349 container start 3ab1f4f9714ca692058c1e767929a17e42df691177ead55f9ff4e2ab9c706259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_dewdney, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:06:15 compute-0 magical_dewdney[287809]: 167 167
Nov 24 10:06:15 compute-0 systemd[1]: libpod-3ab1f4f9714ca692058c1e767929a17e42df691177ead55f9ff4e2ab9c706259.scope: Deactivated successfully.
Nov 24 10:06:15 compute-0 podman[287671]: 2025-11-24 10:06:15.266362038 +0000 UTC m=+0.312960351 container attach 3ab1f4f9714ca692058c1e767929a17e42df691177ead55f9ff4e2ab9c706259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_dewdney, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:06:15 compute-0 podman[287671]: 2025-11-24 10:06:15.267177569 +0000 UTC m=+0.313775872 container died 3ab1f4f9714ca692058c1e767929a17e42df691177ead55f9ff4e2ab9c706259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 10:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6be3faa03311a0ead478977fbe6f6308dd73fdfe85f190f3000d5991162c5b38-merged.mount: Deactivated successfully.
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27530 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:15 compute-0 podman[287671]: 2025-11-24 10:06:15.329648987 +0000 UTC m=+0.376247300 container remove 3ab1f4f9714ca692058c1e767929a17e42df691177ead55f9ff4e2ab9c706259 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_dewdney, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:06:15 compute-0 systemd[1]: libpod-conmon-3ab1f4f9714ca692058c1e767929a17e42df691177ead55f9ff4e2ab9c706259.scope: Deactivated successfully.
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18141 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:06:15 compute-0 podman[287992]: 2025-11-24 10:06:15.491149878 +0000 UTC m=+0.048857440 container create fcdb215e52e05f70e1c69d5ce07a2eb0e0e8a5418cfd5d510040e2e658229bf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:06:15 compute-0 systemd[1]: Started libpod-conmon-fcdb215e52e05f70e1c69d5ce07a2eb0e0e8a5418cfd5d510040e2e658229bf2.scope.
Nov 24 10:06:15 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605e9c4e9eee4b12fc80536307d14719eaca02384b95f89ca990474a4e5ffaed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605e9c4e9eee4b12fc80536307d14719eaca02384b95f89ca990474a4e5ffaed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605e9c4e9eee4b12fc80536307d14719eaca02384b95f89ca990474a4e5ffaed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605e9c4e9eee4b12fc80536307d14719eaca02384b95f89ca990474a4e5ffaed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605e9c4e9eee4b12fc80536307d14719eaca02384b95f89ca990474a4e5ffaed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:15 compute-0 podman[287992]: 2025-11-24 10:06:15.467689412 +0000 UTC m=+0.025397004 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:06:15 compute-0 podman[287992]: 2025-11-24 10:06:15.578655811 +0000 UTC m=+0.136363373 container init fcdb215e52e05f70e1c69d5ce07a2eb0e0e8a5418cfd5d510040e2e658229bf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 10:06:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:15.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:15 compute-0 podman[287992]: 2025-11-24 10:06:15.592810434 +0000 UTC m=+0.150518006 container start fcdb215e52e05f70e1c69d5ce07a2eb0e0e8a5418cfd5d510040e2e658229bf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 10:06:15 compute-0 podman[287992]: 2025-11-24 10:06:15.59823003 +0000 UTC m=+0.155937612 container attach fcdb215e52e05f70e1c69d5ce07a2eb0e0e8a5418cfd5d510040e2e658229bf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 10:06:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Nov 24 10:06:15 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3846290914' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 10:06:15 compute-0 epic_mcclintock[288056]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:06:15 compute-0 epic_mcclintock[288056]: --> All data devices are unavailable
Nov 24 10:06:15 compute-0 systemd[1]: libpod-fcdb215e52e05f70e1c69d5ce07a2eb0e0e8a5418cfd5d510040e2e658229bf2.scope: Deactivated successfully.
Nov 24 10:06:15 compute-0 podman[287992]: 2025-11-24 10:06:15.940242674 +0000 UTC m=+0.497950256 container died fcdb215e52e05f70e1c69d5ce07a2eb0e0e8a5418cfd5d510040e2e658229bf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mcclintock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:06:15 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26089 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-605e9c4e9eee4b12fc80536307d14719eaca02384b95f89ca990474a4e5ffaed-merged.mount: Deactivated successfully.
Nov 24 10:06:15 compute-0 ovs-appctl[288251]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 24 10:06:15 compute-0 podman[287992]: 2025-11-24 10:06:15.986013336 +0000 UTC m=+0.543720898 container remove fcdb215e52e05f70e1c69d5ce07a2eb0e0e8a5418cfd5d510040e2e658229bf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_mcclintock, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 10:06:15 compute-0 systemd[1]: libpod-conmon-fcdb215e52e05f70e1c69d5ce07a2eb0e0e8a5418cfd5d510040e2e658229bf2.scope: Deactivated successfully.
Nov 24 10:06:16 compute-0 ovs-appctl[288260]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 24 10:06:16 compute-0 ovs-appctl[288268]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 24 10:06:16 compute-0 sudo[287484]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:16 compute-0 sudo[288274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:06:16 compute-0 sudo[288274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:16 compute-0 sudo[288274]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:16 compute-0 sudo[288316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:06:16 compute-0 sudo[288316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Nov 24 10:06:16 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3991889880' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 10:06:16 compute-0 ceph-mon[74331]: from='client.26059 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-0 ceph-mon[74331]: from='client.27524 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-0 ceph-mon[74331]: from='client.18129 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2667697992' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 10:06:16 compute-0 ceph-mon[74331]: from='client.27530 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-0 ceph-mon[74331]: from='client.18141 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:06:16 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1865868444' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 24 10:06:16 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3846290914' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 10:06:16 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2984097938' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:06:16 compute-0 nova_compute[257700]: 2025-11-24 10:06:16.298 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:16 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Nov 24 10:06:16 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/156699043' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26098 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
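The pg_autoscaler lines above reduce to one piece of arithmetic: each pool's "pg target" is its share of raw capacity, times its bias, times the cluster-wide PG budget. A minimal sketch in Python, assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs implied by the 60 GiB total in the pgmap lines (neither value is stated explicitly in this log):

    # Reproduce the 'pg target' figures logged by the pg_autoscaler module.
    # Assumed, not logged: mon_target_pg_per_osd = 100 (Ceph default), 3 OSDs.
    TARGET_PG_PER_OSD = 100
    NUM_OSDS = 3

    def raw_pg_target(usage_ratio: float, bias: float) -> float:
        """Capacity share x bias x cluster-wide PG budget."""
        return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    # The 'images' pool line above:
    assert abs(raw_pg_target(0.000665858301588852, 1.0) - 0.19975749047665559) < 1e-12
    # 'cephfs.cephfs.meta' with its bias of 4.0:
    assert abs(raw_pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12

The "quantized to" value is that raw target rounded to a power of two and clamped from below, presumably by each pool's pg_num_min, which would explain why a raw target of roughly 0.0006 still lands on 16 for the CephFS metadata pool.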
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:16 compute-0 nova_compute[257700]: 2025-11-24 10:06:16.477 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:16 compute-0 podman[288536]: 2025-11-24 10:06:16.531359495 +0000 UTC m=+0.044415469 container create 015231f15d7725c26a14b394a0172404e03f0482eee5b7ecb379eb4b1e264edb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:06:16 compute-0 systemd[1]: Started libpod-conmon-015231f15d7725c26a14b394a0172404e03f0482eee5b7ecb379eb4b1e264edb.scope.
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18171 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:06:16 compute-0 podman[288536]: 2025-11-24 10:06:16.607809983 +0000 UTC m=+0.120865947 container init 015231f15d7725c26a14b394a0172404e03f0482eee5b7ecb379eb4b1e264edb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_payne, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:06:16 compute-0 podman[288536]: 2025-11-24 10:06:16.5143199 +0000 UTC m=+0.027375864 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:06:16 compute-0 podman[288536]: 2025-11-24 10:06:16.615392252 +0000 UTC m=+0.128448196 container start 015231f15d7725c26a14b394a0172404e03f0482eee5b7ecb379eb4b1e264edb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_payne, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 10:06:16 compute-0 podman[288536]: 2025-11-24 10:06:16.618553101 +0000 UTC m=+0.131609055 container attach 015231f15d7725c26a14b394a0172404e03f0482eee5b7ecb379eb4b1e264edb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_payne, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 10:06:16 compute-0 romantic_payne[288575]: 167 167
Nov 24 10:06:16 compute-0 systemd[1]: libpod-015231f15d7725c26a14b394a0172404e03f0482eee5b7ecb379eb4b1e264edb.scope: Deactivated successfully.
Nov 24 10:06:16 compute-0 podman[288536]: 2025-11-24 10:06:16.621076654 +0000 UTC m=+0.134132608 container died 015231f15d7725c26a14b394a0172404e03f0482eee5b7ecb379eb4b1e264edb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 10:06:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d1c17e23011c2b2135d951bde7a6292c3c315be0035a5349ae3a5be2acf6866-merged.mount: Deactivated successfully.
Nov 24 10:06:16 compute-0 podman[288536]: 2025-11-24 10:06:16.658709163 +0000 UTC m=+0.171765107 container remove 015231f15d7725c26a14b394a0172404e03f0482eee5b7ecb379eb4b1e264edb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_payne, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 10:06:16 compute-0 systemd[1]: libpod-conmon-015231f15d7725c26a14b394a0172404e03f0482eee5b7ecb379eb4b1e264edb.scope: Deactivated successfully.
Nov 24 10:06:16 compute-0 podman[288684]: 2025-11-24 10:06:16.834625353 +0000 UTC m=+0.045600120 container create f5bb9b9fb2eb1f822173f62609a5c2e0941ddd9a67ba46fa450da2fb74cb472c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:06:16 compute-0 systemd[1]: Started libpod-conmon-f5bb9b9fb2eb1f822173f62609a5c2e0941ddd9a67ba46fa450da2fb74cb472c.scope.
Nov 24 10:06:16 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2419385ad2dca49b50f48e1f6fe7a5cd3a8f187844e6c48621ab784c865735b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2419385ad2dca49b50f48e1f6fe7a5cd3a8f187844e6c48621ab784c865735b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2419385ad2dca49b50f48e1f6fe7a5cd3a8f187844e6c48621ab784c865735b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2419385ad2dca49b50f48e1f6fe7a5cd3a8f187844e6c48621ab784c865735b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:16 compute-0 podman[288684]: 2025-11-24 10:06:16.815908896 +0000 UTC m=+0.026883663 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:06:16 compute-0 podman[288684]: 2025-11-24 10:06:16.930434483 +0000 UTC m=+0.141409260 container init f5bb9b9fb2eb1f822173f62609a5c2e0941ddd9a67ba46fa450da2fb74cb472c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:06:16 compute-0 podman[288684]: 2025-11-24 10:06:16.936921205 +0000 UTC m=+0.147895962 container start f5bb9b9fb2eb1f822173f62609a5c2e0941ddd9a67ba46fa450da2fb74cb472c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 10:06:16 compute-0 podman[288684]: 2025-11-24 10:06:16.941819838 +0000 UTC m=+0.152794615 container attach f5bb9b9fb2eb1f822173f62609a5c2e0941ddd9a67ba46fa450da2fb74cb472c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:06:16 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18177 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:16.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:17 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27572 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:17 compute-0 nifty_johnson[288732]: {
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:     "0": [
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:         {
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             "devices": [
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "/dev/loop3"
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             ],
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             "lv_name": "ceph_lv0",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             "lv_size": "21470642176",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             "name": "ceph_lv0",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             "tags": {
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.cluster_name": "ceph",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.crush_device_class": "",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.encrypted": "0",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.osd_id": "0",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.type": "block",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.vdo": "0",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:                 "ceph.with_tpm": "0"
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             },
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             "type": "block",
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:             "vg_name": "ceph_vg0"
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:         }
Nov 24 10:06:17 compute-0 nifty_johnson[288732]:     ]
Nov 24 10:06:17 compute-0 nifty_johnson[288732]: }
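The JSON block printed by the nifty_johnson container is the output of the "ceph-volume ... lvm list --format json" call issued via sudo a few lines earlier: a map of OSD id to the LVM volumes backing it. A small illustrative parse, assuming the output was captured to a file (the name lvm_list.json is hypothetical):

    import json

    # Hypothetical capture of the 'lvm list --format json' output shown above.
    with open("lvm_list.json") as f:
        report = json.load(f)

    # Map each OSD id to its logical volume, physical device(s), and OSD fsid.
    for osd_id, volumes in report.items():
        for lv in volumes:
            print(osd_id, lv["lv_path"], lv["devices"], lv["tags"]["ceph.osd_fsid"])
    # -> 0 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3'] 4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c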
Nov 24 10:06:17 compute-0 ceph-mon[74331]: from='client.26089 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3991889880' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 10:06:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/156699043' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 24 10:06:17 compute-0 ceph-mon[74331]: from='client.26098 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:17 compute-0 ceph-mon[74331]: pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:17 compute-0 ceph-mon[74331]: from='client.18171 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/827779880' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2621451589' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 24 10:06:17 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1903195685' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 24 10:06:17 compute-0 systemd[1]: libpod-f5bb9b9fb2eb1f822173f62609a5c2e0941ddd9a67ba46fa450da2fb74cb472c.scope: Deactivated successfully.
Nov 24 10:06:17 compute-0 podman[288684]: 2025-11-24 10:06:17.254238664 +0000 UTC m=+0.465213421 container died f5bb9b9fb2eb1f822173f62609a5c2e0941ddd9a67ba46fa450da2fb74cb472c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 10:06:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2419385ad2dca49b50f48e1f6fe7a5cd3a8f187844e6c48621ab784c865735b5-merged.mount: Deactivated successfully.
Nov 24 10:06:17 compute-0 podman[288684]: 2025-11-24 10:06:17.300674893 +0000 UTC m=+0.511649660 container remove f5bb9b9fb2eb1f822173f62609a5c2e0941ddd9a67ba46fa450da2fb74cb472c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:06:17 compute-0 systemd[1]: libpod-conmon-f5bb9b9fb2eb1f822173f62609a5c2e0941ddd9a67ba46fa450da2fb74cb472c.scope: Deactivated successfully.
Nov 24 10:06:17 compute-0 sudo[288316]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:17 compute-0 sudo[288956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:06:17 compute-0 sudo[288956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:17 compute-0 sudo[288956]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 24 10:06:17 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/575993812' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:06:17 compute-0 sudo[289012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:06:17 compute-0 sudo[289012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:17 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18192 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:17.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:17.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:06:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:17.589Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:06:17 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26125 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:17 compute-0 podman[289257]: 2025-11-24 10:06:17.918855818 +0000 UTC m=+0.041573077 container create fcf7954bd814b71493e7fd2e23dec50b1563f2d51abc62650c883ad378b09716 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:06:17 compute-0 systemd[1]: Started libpod-conmon-fcf7954bd814b71493e7fd2e23dec50b1563f2d51abc62650c883ad378b09716.scope.
Nov 24 10:06:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Nov 24 10:06:17 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/242219174' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 24 10:06:17 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:06:17 compute-0 podman[289257]: 2025-11-24 10:06:17.898078511 +0000 UTC m=+0.020795800 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:06:18 compute-0 podman[289257]: 2025-11-24 10:06:18.004528427 +0000 UTC m=+0.127245706 container init fcf7954bd814b71493e7fd2e23dec50b1563f2d51abc62650c883ad378b09716 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_booth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:06:18 compute-0 podman[289257]: 2025-11-24 10:06:18.01067179 +0000 UTC m=+0.133389059 container start fcf7954bd814b71493e7fd2e23dec50b1563f2d51abc62650c883ad378b09716 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 10:06:18 compute-0 podman[289257]: 2025-11-24 10:06:18.01509178 +0000 UTC m=+0.137809059 container attach fcf7954bd814b71493e7fd2e23dec50b1563f2d51abc62650c883ad378b09716 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_booth, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:06:18 compute-0 stoic_booth[289301]: 167 167
Nov 24 10:06:18 compute-0 systemd[1]: libpod-fcf7954bd814b71493e7fd2e23dec50b1563f2d51abc62650c883ad378b09716.scope: Deactivated successfully.
Nov 24 10:06:18 compute-0 podman[289257]: 2025-11-24 10:06:18.019300505 +0000 UTC m=+0.142017784 container died fcf7954bd814b71493e7fd2e23dec50b1563f2d51abc62650c883ad378b09716 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_booth, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 10:06:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0888f86f067b39439993442c8faf0b6dd2c438a9abe07acd4590b7ed9bb3290c-merged.mount: Deactivated successfully.
Nov 24 10:06:18 compute-0 podman[289257]: 2025-11-24 10:06:18.064659118 +0000 UTC m=+0.187376377 container remove fcf7954bd814b71493e7fd2e23dec50b1563f2d51abc62650c883ad378b09716 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_booth, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 10:06:18 compute-0 systemd[1]: libpod-conmon-fcf7954bd814b71493e7fd2e23dec50b1563f2d51abc62650c883ad378b09716.scope: Deactivated successfully.
Nov 24 10:06:18 compute-0 podman[289414]: 2025-11-24 10:06:18.273912689 +0000 UTC m=+0.060503231 container create 3b562a171a5e07ee7f936753efc2dc7e21f1f85b8c992333cb09463eaefb3866 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_napier, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:06:18 compute-0 ceph-mon[74331]: from='client.18177 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:18 compute-0 ceph-mon[74331]: from='client.27572 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/575993812' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:06:18 compute-0 ceph-mon[74331]: from='client.18192 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1759652024' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/242219174' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 24 10:06:18 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3181046644' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 24 10:06:18 compute-0 systemd[1]: Started libpod-conmon-3b562a171a5e07ee7f936753efc2dc7e21f1f85b8c992333cb09463eaefb3866.scope.
Nov 24 10:06:18 compute-0 podman[289414]: 2025-11-24 10:06:18.253791557 +0000 UTC m=+0.040382119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:06:18 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90393214055c823e1134e5ee9c275e176047764c926146b312ead6ad27e0ec6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90393214055c823e1134e5ee9c275e176047764c926146b312ead6ad27e0ec6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90393214055c823e1134e5ee9c275e176047764c926146b312ead6ad27e0ec6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90393214055c823e1134e5ee9c275e176047764c926146b312ead6ad27e0ec6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:06:18 compute-0 podman[289414]: 2025-11-24 10:06:18.37170795 +0000 UTC m=+0.158298512 container init 3b562a171a5e07ee7f936753efc2dc7e21f1f85b8c992333cb09463eaefb3866 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_napier, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 24 10:06:18 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Nov 24 10:06:18 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/10305290' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:18 compute-0 podman[289414]: 2025-11-24 10:06:18.38496536 +0000 UTC m=+0.171555902 container start 3b562a171a5e07ee7f936753efc2dc7e21f1f85b8c992333cb09463eaefb3866 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_napier, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:06:18 compute-0 podman[289414]: 2025-11-24 10:06:18.398305183 +0000 UTC m=+0.184895725 container attach 3b562a171a5e07ee7f936753efc2dc7e21f1f85b8c992333cb09463eaefb3866 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:06:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:06:18 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18216 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:18.936Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:18 compute-0 lvm[289730]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:06:18 compute-0 lvm[289730]: VG ceph_vg0 finished
Nov 24 10:06:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:18.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:19 compute-0 pedantic_napier[289460]: {}
Nov 24 10:06:19 compute-0 systemd[1]: libpod-3b562a171a5e07ee7f936753efc2dc7e21f1f85b8c992333cb09463eaefb3866.scope: Deactivated successfully.
Nov 24 10:06:19 compute-0 systemd[1]: libpod-3b562a171a5e07ee7f936753efc2dc7e21f1f85b8c992333cb09463eaefb3866.scope: Consumed 1.072s CPU time.
Nov 24 10:06:19 compute-0 podman[289785]: 2025-11-24 10:06:19.090337552 +0000 UTC m=+0.020438050 container died 3b562a171a5e07ee7f936753efc2dc7e21f1f85b8c992333cb09463eaefb3866 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_napier, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 10:06:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e90393214055c823e1134e5ee9c275e176047764c926146b312ead6ad27e0ec6-merged.mount: Deactivated successfully.
Nov 24 10:06:19 compute-0 podman[289785]: 2025-11-24 10:06:19.157551189 +0000 UTC m=+0.087651647 container remove 3b562a171a5e07ee7f936753efc2dc7e21f1f85b8c992333cb09463eaefb3866 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_napier, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 10:06:19 compute-0 systemd[1]: libpod-conmon-3b562a171a5e07ee7f936753efc2dc7e21f1f85b8c992333cb09463eaefb3866.scope: Deactivated successfully.
Nov 24 10:06:19 compute-0 sudo[289012]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:06:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:06:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:06:19 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:06:19 compute-0 ceph-mon[74331]: from='client.26125 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:06:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1696960636' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:06:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/10305290' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:19 compute-0 ceph-mon[74331]: pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:06:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1404093567' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2189890635' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 24 10:06:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3837430605' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:19 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1243027862' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:19 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:06:19 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:06:19 compute-0 sudo[289857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:06:19 compute-0 sudo[289857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:19 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27620 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:19 compute-0 sudo[289857]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Nov 24 10:06:19 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2432724872' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:19 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26161 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:06:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:19.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
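The radosgw "beast" lines interleaved through this window are access-log entries for anonymous HEAD / probes against the gateway (load-balancer health checks, by the look of the three rotating client IPs). A throwaway regex for pulling the fields out of such a line, matching only the layout visible in this log:

    import re

    # Fields as they appear in the beast access lines above (illustrative only).
    BEAST = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<latency>[0-9.]+)s'
    )

    line = ('beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous '
            '[24/Nov/2025:10:06:19.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = BEAST.match(line)
    print(m.group("ip"), m.group("status"), m.group("latency"))
    # -> 192.168.122.100 200 0.001000025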
Nov 24 10:06:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Nov 24 10:06:19 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334201095' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:06:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:06:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:06:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:06:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Nov 24 10:06:20 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2910796446' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-0 ceph-mon[74331]: from='client.18216 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-0 ceph-mon[74331]: from='client.27620 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2432724872' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-0 ceph-mon[74331]: from='client.26161 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/334201095' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1692125416' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3264426233' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2910796446' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:06:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:06:20.576 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:06:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:06:20.577 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:06:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:06:20.577 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:06:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Nov 24 10:06:20 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646906727' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-0 sshd-session[286791]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:06:20 compute-0 sshd-session[286791]: banner exchange: Connection from 121.31.210.125 port 57638: Connection timed out
Nov 24 10:06:20 compute-0 podman[290200]: 2025-11-24 10:06:20.791821961 +0000 UTC m=+0.061037514 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 24 10:06:20 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27650 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:20 compute-0 podman[290201]: 2025-11-24 10:06:20.84546233 +0000 UTC m=+0.113378480 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 24 10:06:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:06:20] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Nov 24 10:06:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:06:20] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Nov 24 10:06:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:20.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
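The radosgw "beast:" lines are combined-log-style access records; the anonymous "HEAD / HTTP/1.0" probes arriving every second or two from 192.168.122.100-102 look like load-balancer health checks, which would explain the constant 200 status with zero bytes. A regex fitted to these samples (an assumption, not an official format) makes it easy to surface anything that is not a healthy probe:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous '
            '[24/Nov/2025:10:06:20.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    if m and m.group("status") != "200":
        print("non-200:", m.groupdict())

The recurring req=0x7fd9c8d935d0 pointer is the address of a reused request object, not a unique request identifier, so it cannot be used to correlate distinct requests.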
Nov 24 10:06:21 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18276 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-0 nova_compute[257700]: 2025-11-24 10:06:21.300 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2133798779' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-0 ceph-mon[74331]: pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:06:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/178353154' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/646906727' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3278052733' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1885530590' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-0 nova_compute[257700]: 2025-11-24 10:06:21.479 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Nov 24 10:06:21 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007393360' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:21.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:21 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27668 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26197 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:21 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Nov 24 10:06:21 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2249032923' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27677 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.156260) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782156359, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2326, "num_deletes": 507, "total_data_size": 3513713, "memory_usage": 3581136, "flush_reason": "Manual Compaction"}
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782183313, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3407008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31695, "largest_seqno": 34020, "table_properties": {"data_size": 3396428, "index_size": 5986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 28259, "raw_average_key_size": 20, "raw_value_size": 3371900, "raw_average_value_size": 2413, "num_data_blocks": 257, "num_entries": 1397, "num_filter_entries": 1397, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978623, "oldest_key_time": 1763978623, "file_creation_time": 1763978782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 27090 microseconds, and 9889 cpu microseconds.
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.183361) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3407008 bytes OK
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.183381) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.185124) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.185139) EVENT_LOG_v1 {"time_micros": 1763978782185134, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.185157) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3502204, prev total WAL file size 3502204, number of live WAL files 2.
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.186262) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323533' seq:72057594037927935, type:22 .. '6B7600353034' seq:0, type:0; will stop at (end)
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3327KB)], [68(13MB)]
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782186295, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 17516434, "oldest_snapshot_seqno": -1}
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6624 keys, 16034842 bytes, temperature: kUnknown
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782271133, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 16034842, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15989097, "index_size": 28097, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 172809, "raw_average_key_size": 26, "raw_value_size": 15868474, "raw_average_value_size": 2395, "num_data_blocks": 1114, "num_entries": 6624, "num_filter_entries": 6624, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763978782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.271373) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 16034842 bytes
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.273031) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.3 rd, 188.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 13.5 +0.0 blob) out(15.3 +0.0 blob), read-write-amplify(9.8) write-amplify(4.7) OK, records in: 7653, records dropped: 1029 output_compression: NoCompression
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.273052) EVENT_LOG_v1 {"time_micros": 1763978782273042, "job": 38, "event": "compaction_finished", "compaction_time_micros": 84912, "compaction_time_cpu_micros": 39489, "output_level": 6, "num_output_files": 1, "total_output_size": 16034842, "num_input_records": 7653, "num_output_records": 6624, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782273828, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978782276625, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.186194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.276693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.276700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.276702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.276704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:06:22 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:06:22.276706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
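Everything tagged "rocksdb:" above is the mon compacting its own store.db: job 37 flushes a ~3.4 MB memtable to L0 in about 27 ms, and job 38 immediately compacts it with the existing L6 file, dropping 1029 dead records. The EVENT_LOG_v1 entries carry machine-readable JSON after the marker string, so the timings can be pulled out directly; a small sketch over a dumped log (the file path is illustrative):

    import json

    def rocksdb_events(lines):
        """Yield the JSON payloads of RocksDB EVENT_LOG_v1 log lines."""
        marker = "EVENT_LOG_v1 "
        for line in lines:
            _, sep, payload = line.partition(marker)
            if sep:
                yield json.loads(payload.rstrip())

    with open("ceph-mon.log") as fh:           # illustrative path
        for ev in rocksdb_events(fh):
            if ev.get("event") == "compaction_finished":
                print(ev["job"], ev["compaction_time_micros"], "us ->",
                      ev["total_output_size"], "bytes")

Run over the lines above, this reports job 38 finishing in 84912 us with 16034842 bytes of output, matching the compaction summary.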
Nov 24 10:06:22 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18306 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:22 compute-0 ceph-mon[74331]: from='client.27650 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-0 ceph-mon[74331]: from='client.18276 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2219254769' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3007393360' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2249032923' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3105119572' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Nov 24 10:06:22 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3856428163' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:22 compute-0 nova_compute[257700]: 2025-11-24 10:06:22.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:22.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18330 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26224 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27710 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18339 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mon[74331]: from='client.27668 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mon[74331]: from='client.26197 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mon[74331]: from='client.27677 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mon[74331]: from='client.18306 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mon[74331]: pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:23 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2599335431' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3856428163' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/430424438' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2897248161' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:23.564Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:23.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:23 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Nov 24 10:06:23 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2774141351' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18351 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
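The pg_autoscaler figures above are internally consistent with raw target = usage_ratio x bias x 300, i.e. the default mon_target_pg_per_osd of 100 over what looks like this cluster's 3 OSDs ('.mgr': 7.185749983720779e-06 x 300 = 0.0021557249951162337 exactly), with sub-minimum results clamped up to each pool's floor. A toy check of that reading — an inference from these log values, not the pg_autoscaler module's real code:

    def pg_target(usage_ratio, bias, pg_min, pgs_per_osd=100, num_osds=3):
        # raw target, then clamp tiny results up to the pool's minimum pg_num
        raw = usage_ratio * bias * pgs_per_osd * num_osds
        return raw, max(pg_min, round(raw))

    print(pg_target(7.185749983720779e-06, 1.0, 1))    # '.mgr'   -> 0.00215..., 1
    print(pg_target(0.000665858301588852, 1.0, 32))    # 'images' -> 0.19975..., 32
    print(pg_target(5.087256625643029e-07, 4.0, 16))   # 'cephfs.cephfs.meta' -> 0.00061..., 16

With every pool this far below its floor, the "quantized to" value is simply the per-pool minimum, which is why the targets never move between passes.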
Nov 24 10:06:23 compute-0 nova_compute[257700]: 2025-11-24 10:06:23.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:23 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26239 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:23 compute-0 nova_compute[257700]: 2025-11-24 10:06:23.942 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:06:23 compute-0 nova_compute[257700]: 2025-11-24 10:06:23.942 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:06:23 compute-0 nova_compute[257700]: 2025-11-24 10:06:23.943 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:06:23 compute-0 nova_compute[257700]: 2025-11-24 10:06:23.943 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:06:23 compute-0 nova_compute[257700]: 2025-11-24 10:06:23.943 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:06:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Nov 24 10:06:24 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543482892' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Nov 24 10:06:24 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3308493931' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26254 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:06:24 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469091592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:24 compute-0 nova_compute[257700]: 2025-11-24 10:06:24.401 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
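The ~0.46 s subprocess round-trip above is nova's resource tracker sizing its RBD backend with the exact command it logs. Re-running it by hand shows what the tracker sees; a sketch assuming the client.openstack keyring is readable on this host and using the stats/total_bytes/total_avail_bytes fields that recent ceph df JSON emits:

    import json
    import subprocess

    # Same invocation nova_compute logs above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"total {stats['total_bytes'] / gib:.1f} GiB, "
          f"avail {stats['total_avail_bytes'] / gib:.1f} GiB")

Against the pgmap figures above this should report roughly 60 GiB total with nearly all of it available.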
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18375 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 nova_compute[257700]: 2025-11-24 10:06:24.553 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:06:24 compute-0 nova_compute[257700]: 2025-11-24 10:06:24.555 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4320MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:06:24 compute-0 nova_compute[257700]: 2025-11-24 10:06:24.555 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:06:24 compute-0 nova_compute[257700]: 2025-11-24 10:06:24.555 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:06:24 compute-0 nova_compute[257700]: 2025-11-24 10:06:24.620 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:06:24 compute-0 nova_compute[257700]: 2025-11-24 10:06:24.621 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:06:24 compute-0 nova_compute[257700]: 2025-11-24 10:06:24.636 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:06:24 compute-0 ceph-mon[74331]: from='client.18330 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mon[74331]: from='client.26224 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mon[74331]: from='client.27710 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mon[74331]: from='client.18339 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3001595460' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2774141351' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2543482892' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3308493931' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1469091592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18387 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:06:24 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27740 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:06:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:06:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:06:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:06:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:25.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:06:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3309388613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:25 compute-0 nova_compute[257700]: 2025-11-24 10:06:25.101 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:06:25 compute-0 nova_compute[257700]: 2025-11-24 10:06:25.108 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:06:25 compute-0 nova_compute[257700]: 2025-11-24 10:06:25.126 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:06:25 compute-0 nova_compute[257700]: 2025-11-24 10:06:25.127 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:06:25 compute-0 nova_compute[257700]: 2025-11-24 10:06:25.128 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
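The inventory dict reported at 10:06:25.126 is what placement uses for admission: capacity per resource class is (total - reserved) x allocation_ratio, i.e. 32 schedulable VCPUs, 7168 MB of RAM and 52.2 GB of disk for this host. Checking the arithmetic with the values copied from the log line:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2

Because the computed inventory matches what placement already has, the tracker logs "Inventory has not changed" and releases the compute_resources lock after 0.572 s without an API write.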
Nov 24 10:06:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Nov 24 10:06:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1370692082' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27749 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26275 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Nov 24 10:06:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466748518' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:25.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.18351 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.26239 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.26254 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.18375 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2550223427' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1429568378' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.18387 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.27740 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2974131416' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3309388613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1370692082' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.27749 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.26275 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3466748518' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26284 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:06:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 24 10:06:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1640188076' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:25 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18435 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-0 nova_compute[257700]: 2025-11-24 10:06:26.127 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:26 compute-0 nova_compute[257700]: 2025-11-24 10:06:26.128 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:06:26 compute-0 nova_compute[257700]: 2025-11-24 10:06:26.128 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:06:26 compute-0 nova_compute[257700]: 2025-11-24 10:06:26.141 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:06:26 compute-0 nova_compute[257700]: 2025-11-24 10:06:26.141 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:26 compute-0 nova_compute[257700]: 2025-11-24 10:06:26.141 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:26 compute-0 nova_compute[257700]: 2025-11-24 10:06:26.302 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:26 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18444 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:26 compute-0 nova_compute[257700]: 2025-11-24 10:06:26.480 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:26 compute-0 ceph-mon[74331]: from='client.26284 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1640188076' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-0 ceph-mon[74331]: from='client.18435 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1815450278' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1330152833' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-0 ceph-mon[74331]: from='client.18444 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-0 ceph-mon[74331]: pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3042987099' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 24 10:06:26 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3664216049' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:26 compute-0 podman[291009]: 2025-11-24 10:06:26.921807729 +0000 UTC m=+0.079500224 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
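
Each podman health_status event, like the one above, embeds the container's full config, including its healthcheck ('test': '/openstack/healthcheck'). The same data can be read back from podman directly; a sketch assuming podman's docker-compatible inspect JSON layout:

    import json
    import subprocess

    # Container name taken from the event above; the inspect JSON keys
    # (Config.Healthcheck, State.Health) are assumed per podman's
    # docker-compatible inspect output.
    out = subprocess.run(
        ["podman", "inspect", "ovn_metadata_agent"],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)[0]
    print(info["Config"]["Healthcheck"]["Test"])  # the '/openstack/healthcheck' command
    print(info["State"]["Health"]["Status"])      # 'healthy', matching the event
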
Nov 24 10:06:26 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26308 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:27.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
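
radosgw's beast frontend emits one access line per request in a roughly Apache-style layout. A small parser for the shape seen in these lines (the field order is inferred from this log, not from a documented stable format):

    import re

    BEAST_RE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<size>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous '
            '[24/Nov/2025:10:06:27.003 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000024s')
    m = BEAST_RE.match(line)
    print(m.group('ip'), m.group('status'), m.group('latency'))
    # 192.168.122.102 200 0.001000024
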
Nov 24 10:06:27 compute-0 virtqemud[257224]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 10:06:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Nov 24 10:06:27 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/430674398' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:27 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26314 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:27.589Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:27.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:27 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3664216049' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:27 compute-0 ceph-mon[74331]: from='client.26308 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:27 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/430674398' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:27 compute-0 ceph-mon[74331]: from='client.26314 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:06:27 compute-0 nova_compute[257700]: 2025-11-24 10:06:27.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:27 compute-0 nova_compute[257700]: 2025-11-24 10:06:27.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:27 compute-0 nova_compute[257700]: 2025-11-24 10:06:27.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
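
The _reclaim_queued_deletes message is a guard, not a failure: nova only reclaims soft-deleted instances when reclaim_instance_interval is positive, and it is left at 0 here. Paraphrased as a sketch (not nova's verbatim source):

    # Sketch of the guard behind the log line above.
    def _reclaim_queued_deletes(conf, log):
        interval = conf.reclaim_instance_interval
        if interval <= 0:
            log.debug("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise: find SOFT_DELETED instances older than `interval`
        # seconds and delete them for real.
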
Nov 24 10:06:27 compute-0 systemd[1]: Starting Time & Date Service...
Nov 24 10:06:28 compute-0 systemd[1]: Started Time & Date Service.
Nov 24 10:06:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2984933213' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2968096678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/257950614' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 24 10:06:28 compute-0 ceph-mon[74331]: pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2719288102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:28.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:29.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:29 compute-0 sudo[291247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:06:29 compute-0 sudo[291247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:29 compute-0 sudo[291247]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:29.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:29 compute-0 nova_compute[257700]: 2025-11-24 10:06:29.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:06:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:06:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:06:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
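
This ganesha block repeats every few seconds: the NFS server (re)enters a 90-second grace period, finds no clients with reclaimable state (clid count 0), and the rados_cluster backend answers ret=-45 when asked whether the cluster is still enforcing grace. If that value is a negated Linux errno (an assumption), it decodes as:

    import errno
    import os

    # Assumes ganesha's ret=-45 is a negated Linux errno.
    print(errno.errorcode[45], '->', os.strerror(45))
    # EL2NSYNC -> Level 2 not synchronized
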
Nov 24 10:06:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2721553192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/416144228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:06:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:30 compute-0 nova_compute[257700]: 2025-11-24 10:06:30.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:06:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:06:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:06:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:06:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
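
The paired GET /metrics lines are a single Prometheus scrape of the ceph-mgr prometheus module, logged once by the container wrapper and once by cherrypy. The endpoint can be pulled by hand; a sketch assuming the module's default port 9283 on the mgr host seen above:

    import urllib.request

    # Host taken from the log; 9283 is the prometheus module's default port.
    url = 'http://192.168.122.100:9283/metrics'
    with urllib.request.urlopen(url, timeout=5) as resp:
        text = resp.read().decode()
    print(len(text.encode()), 'bytes')  # ~48459 per the scrape above
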
Nov 24 10:06:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:31.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:31 compute-0 ceph-mon[74331]: pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:06:31 compute-0 nova_compute[257700]: 2025-11-24 10:06:31.303 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:31 compute-0 nova_compute[257700]: 2025-11-24 10:06:31.480 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:31.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:33.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:33 compute-0 ceph-mon[74331]: pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:33.566Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:06:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:33.566Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
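
The warn/error pair shows Alertmanager's retry semantics: the webhook integration retries inside its notification window ("will retry later"), then the dispatcher gives up and logs one aggregate failure. The "dial tcp ... i/o timeout" errors say the two peers are not reachable on 8443 (dead service, firewall, or routing); a quick reachability probe along the same lines:

    import socket

    # Receiver endpoints copied from the failing notifications above.
    RECEIVERS = [
        ('compute-1.ctlplane.example.com', 8443),
        ('compute-2.ctlplane.example.com', 8443),
    ]
    for host, port in RECEIVERS:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(host, port, 'reachable')
        except OSError as exc:  # mirrors the 'dial tcp ... i/o timeout'
            print(host, port, 'unreachable:', exc)
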
Nov 24 10:06:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:33.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:06:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 3756 syncs, 3.51 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2622 writes, 8802 keys, 2622 commit groups, 1.0 writes per commit group, ingest: 8.59 MB, 0.01 MB/s
                                           Interval WAL: 2622 writes, 1136 syncs, 2.31 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
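
The WAL figures in this dump are internally consistent: "writes per sync" is just writes divided by syncs, with rocksdb rounding the cumulative write count to "13K". Checking both reported ratios:

    # Interval: 2622 writes / 1136 syncs
    print(round(2622 / 1136, 2))   # 2.31, as logged

    # Cumulative: 3.51 writes/sync * 3756 syncs ~= 13184 writes ("13K")
    print(round(3.51 * 3756))      # 13184
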
Nov 24 10:06:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:06:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:06:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:06:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:06:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:35.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:35 compute-0 sshd-session[291281]: Received disconnect from 83.229.122.23 port 32838:11: Bye Bye [preauth]
Nov 24 10:06:35 compute-0 sshd-session[291281]: Disconnected from authenticating user root 83.229.122.23 port 32838 [preauth]
Nov 24 10:06:35 compute-0 ceph-mon[74331]: pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.002000049s ======
Nov 24 10:06:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:35.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Nov 24 10:06:36 compute-0 nova_compute[257700]: 2025-11-24 10:06:36.307 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:36 compute-0 nova_compute[257700]: 2025-11-24 10:06:36.482 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:36 compute-0 sshd-session[291277]: Invalid user nexus from 45.78.198.78 port 47974
Nov 24 10:06:36 compute-0 sshd-session[291277]: Received disconnect from 45.78.198.78 port 47974:11: Bye Bye [preauth]
Nov 24 10:06:36 compute-0 sshd-session[291277]: Disconnected from invalid user nexus 45.78.198.78 port 47974 [preauth]
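
The sshd lines are routine brute-force noise: a root attempt and an invalid user "nexus", both disconnected pre-auth. Tallying such sources from a saved copy of this journal (file path hypothetical):

    import collections
    import re

    pat = re.compile(
        r'sshd-session\[\d+\]: Disconnected from '
        r'.*?(\d+\.\d+\.\d+\.\d+) port \d+ \[preauth\]'
    )
    hits = collections.Counter()
    with open('journal.log') as fh:  # hypothetical saved copy of this log
        for line in fh:
            m = pat.search(line)
            if m:
                hits[m.group(1)] += 1
    print(hits.most_common(5))
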
Nov 24 10:06:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:37.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:37 compute-0 ceph-mon[74331]: pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:37.591Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:37.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:38.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:39.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:39 compute-0 ceph-mon[74331]: pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:39.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:06:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:06:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:06:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:06:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:06:40] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:06:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:06:40] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:06:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:41.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:41 compute-0 nova_compute[257700]: 2025-11-24 10:06:41.308 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:41 compute-0 nova_compute[257700]: 2025-11-24 10:06:41.484 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:41 compute-0 ceph-mon[74331]: pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:41.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:06:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:43.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:06:43 compute-0 ceph-mon[74331]: pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:43.567Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:43.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:06:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:06:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:06:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:06:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:45.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:06:45
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'vms', '.nfs', 'cephfs.cephfs.data', 'images', '.rgw.root']
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
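
A no-op balancer pass: in upmap mode the module asks the osdmap optimizer for up to 10 changes per run and throttles execution so the misplaced fraction stays under the configured maximum (0.050000 above); with all 353 PGs active+clean the optimizer finds nothing, hence "prepared 0/10 upmap changes". The throttle, reduced to a sketch:

    # Illustrative gate only, not the mgr balancer module's actual code.
    def below_max_misplaced(misplaced_pgs, total_pgs, max_misplaced=0.05):
        return (misplaced_pgs / total_pgs) < max_misplaced

    print(below_max_misplaced(0, 353))  # True: everything active+clean
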
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:06:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:45.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:45 compute-0 ceph-mon[74331]: pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:06:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:06:46 compute-0 nova_compute[257700]: 2025-11-24 10:06:46.310 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:46 compute-0 nova_compute[257700]: 2025-11-24 10:06:46.485 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:46 compute-0 ceph-mon[74331]: pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:47.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:47.592Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:06:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:47.592Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:06:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:47.592Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:06:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:47.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:48.940Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:06:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:49.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:06:49 compute-0 ceph-mon[74331]: pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:49 compute-0 sudo[291298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:06:49 compute-0 sudo[291298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:06:49 compute-0 sudo[291298]: pam_unix(sudo:session): session closed for user root
Nov 24 10:06:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:49.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:06:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:06:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:06:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:06:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:06:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:06:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:06:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:06:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:51.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:51 compute-0 nova_compute[257700]: 2025-11-24 10:06:51.312 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:51 compute-0 nova_compute[257700]: 2025-11-24 10:06:51.486 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:51 compute-0 ceph-mon[74331]: pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:06:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:51.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:06:51 compute-0 podman[291326]: 2025-11-24 10:06:51.8322163 +0000 UTC m=+0.099243027 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 10:06:51 compute-0 podman[291327]: 2025-11-24 10:06:51.844070546 +0000 UTC m=+0.108831457 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:06:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:53.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:53 compute-0 ceph-mon[74331]: pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:53.568Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:53.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:06:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:06:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:06:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:06:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:06:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:55.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:06:55 compute-0 ceph-mon[74331]: pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:55.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:56 compute-0 nova_compute[257700]: 2025-11-24 10:06:56.314 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:56 compute-0 nova_compute[257700]: 2025-11-24 10:06:56.487 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:06:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:57.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:06:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:57.593Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:57.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:57 compute-0 podman[291380]: 2025-11-24 10:06:57.783188301 +0000 UTC m=+0.061686680 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 10:06:57 compute-0 ceph-mon[74331]: pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:06:58 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 10:06:58 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 10:06:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:06:58.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:06:59 compute-0 ceph-mon[74331]: pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:06:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:06:59.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:06:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:06:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:06:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:06:59.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:06:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:07:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:07:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:07:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:07:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:07:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:07:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:07:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:07:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:07:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:01.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:01 compute-0 nova_compute[257700]: 2025-11-24 10:07:01.317 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:01 compute-0 nova_compute[257700]: 2025-11-24 10:07:01.488 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:01 compute-0 ceph-mon[74331]: pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:01.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.603565) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822603631, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 644, "num_deletes": 251, "total_data_size": 804253, "memory_usage": 817224, "flush_reason": "Manual Compaction"}
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 24 10:07:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1642057191' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:07:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1642057191' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822610690, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 793238, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34021, "largest_seqno": 34664, "table_properties": {"data_size": 789716, "index_size": 1366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8663, "raw_average_key_size": 19, "raw_value_size": 782517, "raw_average_value_size": 1803, "num_data_blocks": 60, "num_entries": 434, "num_filter_entries": 434, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978782, "oldest_key_time": 1763978782, "file_creation_time": 1763978822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 7148 microseconds, and 2591 cpu microseconds.
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.610722) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 793238 bytes OK
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.610735) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.614725) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.614747) EVENT_LOG_v1 {"time_micros": 1763978822614741, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.614768) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 800718, prev total WAL file size 800718, number of live WAL files 2.
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.615468) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(774KB)], [71(15MB)]
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822615502, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 16828080, "oldest_snapshot_seqno": -1}
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6547 keys, 14696685 bytes, temperature: kUnknown
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822713920, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 14696685, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14652405, "index_size": 26824, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 172194, "raw_average_key_size": 26, "raw_value_size": 14534082, "raw_average_value_size": 2219, "num_data_blocks": 1057, "num_entries": 6547, "num_filter_entries": 6547, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763978822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.714175) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 14696685 bytes
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.715256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.9 rd, 149.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 15.3 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(39.7) write-amplify(18.5) OK, records in: 7058, records dropped: 511 output_compression: NoCompression
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.715271) EVENT_LOG_v1 {"time_micros": 1763978822715264, "job": 40, "event": "compaction_finished", "compaction_time_micros": 98488, "compaction_time_cpu_micros": 38118, "output_level": 6, "num_output_files": 1, "total_output_size": 14696685, "num_input_records": 7058, "num_output_records": 6547, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822715485, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978822718049, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.615380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.718210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.718217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.718219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.718221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:07:02 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:07:02.718223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:07:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:03.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:03 compute-0 sshd-session[291374]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:07:03 compute-0 sshd-session[291374]: banner exchange: Connection from 14.215.126.91 port 41794: Connection timed out
Nov 24 10:07:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:03.569Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:07:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:03.570Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:03.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:03 compute-0 ceph-mon[74331]: pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:04 compute-0 ceph-mon[74331]: pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:07:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:07:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:07:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:07:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:05.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:05.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:06 compute-0 nova_compute[257700]: 2025-11-24 10:07:06.318 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:06 compute-0 nova_compute[257700]: 2025-11-24 10:07:06.489 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:07.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:07 compute-0 ceph-mon[74331]: pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:07.594Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:07.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:08.942Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:09.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:09 compute-0 ceph-mon[74331]: pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:09 compute-0 sudo[291416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:07:09 compute-0 sudo[291416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:09 compute-0 sudo[291416]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:09.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:07:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:07:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:07:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:07:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:07:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:07:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:07:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:07:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:11.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:11 compute-0 nova_compute[257700]: 2025-11-24 10:07:11.320 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:11 compute-0 nova_compute[257700]: 2025-11-24 10:07:11.491 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:11.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:11 compute-0 ceph-mon[74331]: pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:12 compute-0 ceph-mon[74331]: pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:07:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:13.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:07:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:13.571Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:07:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:13.571Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:13.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:07:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:07:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:07:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:07:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:15.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:07:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:07:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:07:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:07:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:07:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:07:15 compute-0 ceph-mon[74331]: pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:07:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:07:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:15.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:07:16 compute-0 nova_compute[257700]: 2025-11-24 10:07:16.324 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:16 compute-0 nova_compute[257700]: 2025-11-24 10:07:16.492 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:07:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:17.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:07:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:17 compute-0 ceph-mon[74331]: pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:17.595Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:17.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:18.943Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:19.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:19 compute-0 sshd-session[291449]: Invalid user jason from 36.255.3.203 port 40344
Nov 24 10:07:19 compute-0 sshd-session[291449]: Received disconnect from 36.255.3.203 port 40344:11: Bye Bye [preauth]
Nov 24 10:07:19 compute-0 sshd-session[291449]: Disconnected from invalid user jason 36.255.3.203 port 40344 [preauth]
Nov 24 10:07:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:19.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:19 compute-0 sudo[291453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:07:19 compute-0 sudo[291453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:19 compute-0 sudo[291453]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:19 compute-0 sudo[291478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Nov 24 10:07:19 compute-0 sudo[291478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:19 compute-0 ceph-mon[74331]: pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:19 compute-0 sudo[291478]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:19 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:07:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:07:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:07:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:07:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:07:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:07:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:20 compute-0 sudo[291522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:07:20 compute-0 sudo[291522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:20 compute-0 sudo[291522]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:20 compute-0 sudo[291547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:07:20 compute-0 sudo[291547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:07:20.576 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:07:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:07:20.577 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:07:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:07:20.577 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:07:20 compute-0 sudo[291547]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:07:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 744 B/s rd, 0 op/s
Nov 24 10:07:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:07:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:20 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:07:20 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:20 compute-0 sudo[291602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:07:20 compute-0 sudo[291602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:20 compute-0 sudo[291602]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:07:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 24 10:07:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:07:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 24 10:07:21 compute-0 sudo[291627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:07:21 compute-0 sudo[291627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:21 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:21 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:21 compute-0 ceph-mon[74331]: pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:21 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:07:21 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:07:21 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:21 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:21 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:07:21 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:07:21 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:07:21 compute-0 sudo[283309]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:21 compute-0 sshd-session[283308]: Received disconnect from 192.168.122.10 port 41258:11: disconnected by user
Nov 24 10:07:21 compute-0 sshd-session[283308]: Disconnected from user zuul 192.168.122.10 port 41258
Nov 24 10:07:21 compute-0 sshd-session[283305]: pam_unix(sshd:session): session closed for user zuul
Nov 24 10:07:21 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Nov 24 10:07:21 compute-0 systemd[1]: session-56.scope: Consumed 2min 51.812s CPU time, 884.6M memory peak, read 381.7M from disk, written 275.9M to disk.
Nov 24 10:07:21 compute-0 systemd-logind[822]: Session 56 logged out. Waiting for processes to exit.
Nov 24 10:07:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:21.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:21 compute-0 systemd-logind[822]: Removed session 56.
Nov 24 10:07:21 compute-0 sshd-session[291652]: Accepted publickey for zuul from 192.168.122.10 port 43090 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 10:07:21 compute-0 systemd-logind[822]: New session 57 of user zuul.
Nov 24 10:07:21 compute-0 systemd[1]: Started Session 57 of User zuul.
Nov 24 10:07:21 compute-0 sshd-session[291652]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 10:07:21 compute-0 sudo[291661]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-11-24-frpavih.tar.xz
Nov 24 10:07:21 compute-0 sudo[291661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 10:07:21 compute-0 nova_compute[257700]: 2025-11-24 10:07:21.324 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:21 compute-0 sudo[291661]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:21 compute-0 sshd-session[291657]: Received disconnect from 192.168.122.10 port 43090:11: disconnected by user
Nov 24 10:07:21 compute-0 podman[291723]: 2025-11-24 10:07:21.430854899 +0000 UTC m=+0.039659731 container create c5a0d9aa36c41dc228ca9f2f0f44c43bbde3e1dd016fee58673ed80124fa2bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bassi, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 10:07:21 compute-0 sshd-session[291657]: Disconnected from user zuul 192.168.122.10 port 43090
Nov 24 10:07:21 compute-0 sshd-session[291652]: pam_unix(sshd:session): session closed for user zuul
Nov 24 10:07:21 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Nov 24 10:07:21 compute-0 systemd-logind[822]: Session 57 logged out. Waiting for processes to exit.
Nov 24 10:07:21 compute-0 systemd-logind[822]: Removed session 57.
Nov 24 10:07:21 compute-0 systemd[1]: Started libpod-conmon-c5a0d9aa36c41dc228ca9f2f0f44c43bbde3e1dd016fee58673ed80124fa2bd6.scope.
Nov 24 10:07:21 compute-0 nova_compute[257700]: 2025-11-24 10:07:21.492 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:07:21 compute-0 podman[291723]: 2025-11-24 10:07:21.411848985 +0000 UTC m=+0.020653817 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:07:21 compute-0 podman[291723]: 2025-11-24 10:07:21.514819765 +0000 UTC m=+0.123624617 container init c5a0d9aa36c41dc228ca9f2f0f44c43bbde3e1dd016fee58673ed80124fa2bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:07:21 compute-0 podman[291723]: 2025-11-24 10:07:21.521248665 +0000 UTC m=+0.130053497 container start c5a0d9aa36c41dc228ca9f2f0f44c43bbde3e1dd016fee58673ed80124fa2bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 24 10:07:21 compute-0 podman[291723]: 2025-11-24 10:07:21.524626979 +0000 UTC m=+0.133431811 container attach c5a0d9aa36c41dc228ca9f2f0f44c43bbde3e1dd016fee58673ed80124fa2bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bassi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:07:21 compute-0 priceless_bassi[291741]: 167 167
Nov 24 10:07:21 compute-0 podman[291723]: 2025-11-24 10:07:21.529536951 +0000 UTC m=+0.138341783 container died c5a0d9aa36c41dc228ca9f2f0f44c43bbde3e1dd016fee58673ed80124fa2bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bassi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:07:21 compute-0 systemd[1]: libpod-c5a0d9aa36c41dc228ca9f2f0f44c43bbde3e1dd016fee58673ed80124fa2bd6.scope: Deactivated successfully.
Nov 24 10:07:21 compute-0 sshd-session[291737]: Accepted publickey for zuul from 192.168.122.10 port 43102 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 10:07:21 compute-0 systemd-logind[822]: New session 58 of user zuul.
Nov 24 10:07:21 compute-0 systemd[1]: Started Session 58 of User zuul.
Nov 24 10:07:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-87ed4efbecdc072c0b7b3f2657e3b7215fdba0e31d1fdeebea43e3d7c837af03-merged.mount: Deactivated successfully.
Nov 24 10:07:21 compute-0 sshd-session[291737]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 10:07:21 compute-0 podman[291723]: 2025-11-24 10:07:21.579935439 +0000 UTC m=+0.188740281 container remove c5a0d9aa36c41dc228ca9f2f0f44c43bbde3e1dd016fee58673ed80124fa2bd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bassi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:07:21 compute-0 systemd[1]: libpod-conmon-c5a0d9aa36c41dc228ca9f2f0f44c43bbde3e1dd016fee58673ed80124fa2bd6.scope: Deactivated successfully.
Nov 24 10:07:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:21.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:21 compute-0 sudo[291764]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Nov 24 10:07:21 compute-0 sudo[291764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 10:07:21 compute-0 sudo[291764]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:21 compute-0 sshd-session[291761]: Received disconnect from 192.168.122.10 port 43102:11: disconnected by user
Nov 24 10:07:21 compute-0 sshd-session[291761]: Disconnected from user zuul 192.168.122.10 port 43102
Nov 24 10:07:21 compute-0 sshd-session[291737]: pam_unix(sshd:session): session closed for user zuul
Nov 24 10:07:21 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Nov 24 10:07:21 compute-0 systemd-logind[822]: Session 58 logged out. Waiting for processes to exit.
Nov 24 10:07:21 compute-0 systemd-logind[822]: Removed session 58.
Nov 24 10:07:21 compute-0 podman[291794]: 2025-11-24 10:07:21.748258089 +0000 UTC m=+0.038503591 container create 57e093cb009781d2740ff3a421af42e7cc0fdf0e000a9e85993576225f538a21 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_feistel, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:07:21 compute-0 systemd[1]: Started libpod-conmon-57e093cb009781d2740ff3a421af42e7cc0fdf0e000a9e85993576225f538a21.scope.
Nov 24 10:07:21 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af46ef37180fa3a0887917d093719cbc91bc3648721f6e7b520cca25a0750ea2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af46ef37180fa3a0887917d093719cbc91bc3648721f6e7b520cca25a0750ea2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af46ef37180fa3a0887917d093719cbc91bc3648721f6e7b520cca25a0750ea2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af46ef37180fa3a0887917d093719cbc91bc3648721f6e7b520cca25a0750ea2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af46ef37180fa3a0887917d093719cbc91bc3648721f6e7b520cca25a0750ea2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:21 compute-0 podman[291794]: 2025-11-24 10:07:21.733239515 +0000 UTC m=+0.023485037 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:07:21 compute-0 podman[291794]: 2025-11-24 10:07:21.836591823 +0000 UTC m=+0.126837325 container init 57e093cb009781d2740ff3a421af42e7cc0fdf0e000a9e85993576225f538a21 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 10:07:21 compute-0 podman[291794]: 2025-11-24 10:07:21.843985369 +0000 UTC m=+0.134230871 container start 57e093cb009781d2740ff3a421af42e7cc0fdf0e000a9e85993576225f538a21 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:07:21 compute-0 podman[291794]: 2025-11-24 10:07:21.846784999 +0000 UTC m=+0.137030661 container attach 57e093cb009781d2740ff3a421af42e7cc0fdf0e000a9e85993576225f538a21 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_feistel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:07:22 compute-0 ceph-mon[74331]: pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:07:22 compute-0 ceph-mon[74331]: pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 744 B/s rd, 0 op/s
Nov 24 10:07:22 compute-0 admiring_feistel[291810]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:07:22 compute-0 admiring_feistel[291810]: --> All data devices are unavailable
Nov 24 10:07:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:22 compute-0 systemd[1]: libpod-57e093cb009781d2740ff3a421af42e7cc0fdf0e000a9e85993576225f538a21.scope: Deactivated successfully.
Nov 24 10:07:22 compute-0 podman[291794]: 2025-11-24 10:07:22.178232869 +0000 UTC m=+0.468478381 container died 57e093cb009781d2740ff3a421af42e7cc0fdf0e000a9e85993576225f538a21 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Nov 24 10:07:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-af46ef37180fa3a0887917d093719cbc91bc3648721f6e7b520cca25a0750ea2-merged.mount: Deactivated successfully.
Nov 24 10:07:22 compute-0 podman[291794]: 2025-11-24 10:07:22.224940705 +0000 UTC m=+0.515186207 container remove 57e093cb009781d2740ff3a421af42e7cc0fdf0e000a9e85993576225f538a21 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_feistel, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 10:07:22 compute-0 systemd[1]: libpod-conmon-57e093cb009781d2740ff3a421af42e7cc0fdf0e000a9e85993576225f538a21.scope: Deactivated successfully.
Nov 24 10:07:22 compute-0 sudo[291627]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:22 compute-0 podman[291828]: 2025-11-24 10:07:22.31373551 +0000 UTC m=+0.103394831 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd)
Nov 24 10:07:22 compute-0 podman[291838]: 2025-11-24 10:07:22.317341671 +0000 UTC m=+0.107628717 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 24 10:07:22 compute-0 sudo[291874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:07:22 compute-0 sudo[291874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:22 compute-0 sudo[291874]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:22 compute-0 sudo[291908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:07:22 compute-0 sudo[291908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:22 compute-0 podman[291974]: 2025-11-24 10:07:22.806333303 +0000 UTC m=+0.049311161 container create ed3a56059af22c32e9ad4a9a463c9603de03c9aca6cf0ecfe058989fea392dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:07:22 compute-0 systemd[1]: Started libpod-conmon-ed3a56059af22c32e9ad4a9a463c9603de03c9aca6cf0ecfe058989fea392dbd.scope.
Nov 24 10:07:22 compute-0 podman[291974]: 2025-11-24 10:07:22.78616609 +0000 UTC m=+0.029143968 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:07:22 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:07:22 compute-0 podman[291974]: 2025-11-24 10:07:22.907560999 +0000 UTC m=+0.150538897 container init ed3a56059af22c32e9ad4a9a463c9603de03c9aca6cf0ecfe058989fea392dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:07:22 compute-0 podman[291974]: 2025-11-24 10:07:22.921610699 +0000 UTC m=+0.164588587 container start ed3a56059af22c32e9ad4a9a463c9603de03c9aca6cf0ecfe058989fea392dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 10:07:22 compute-0 epic_jemison[291990]: 167 167
Nov 24 10:07:22 compute-0 systemd[1]: libpod-ed3a56059af22c32e9ad4a9a463c9603de03c9aca6cf0ecfe058989fea392dbd.scope: Deactivated successfully.
Nov 24 10:07:22 compute-0 podman[291974]: 2025-11-24 10:07:22.927446045 +0000 UTC m=+0.170423923 container attach ed3a56059af22c32e9ad4a9a463c9603de03c9aca6cf0ecfe058989fea392dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jemison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 10:07:22 compute-0 podman[291974]: 2025-11-24 10:07:22.928788858 +0000 UTC m=+0.171766726 container died ed3a56059af22c32e9ad4a9a463c9603de03c9aca6cf0ecfe058989fea392dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 10:07:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ae9d9812f921ef872c1cede0b73d4c00c6453bf8a0911ed9a36e23304a211b8-merged.mount: Deactivated successfully.
Nov 24 10:07:22 compute-0 podman[291974]: 2025-11-24 10:07:22.981067213 +0000 UTC m=+0.224045091 container remove ed3a56059af22c32e9ad4a9a463c9603de03c9aca6cf0ecfe058989fea392dbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jemison, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 10:07:22 compute-0 systemd[1]: libpod-conmon-ed3a56059af22c32e9ad4a9a463c9603de03c9aca6cf0ecfe058989fea392dbd.scope: Deactivated successfully.
Nov 24 10:07:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:23.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:23 compute-0 podman[292015]: 2025-11-24 10:07:23.173538026 +0000 UTC m=+0.041668741 container create cfbc8926ca388c81d462d0f7ad73845169fcbe598cd54cdf4107f9e90f45abd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Nov 24 10:07:23 compute-0 systemd[1]: Started libpod-conmon-cfbc8926ca388c81d462d0f7ad73845169fcbe598cd54cdf4107f9e90f45abd4.scope.
Nov 24 10:07:23 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9445302c76cab694ba554147ea16dc1d1c65b2578dc9fc36991ec42bf14bd149/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9445302c76cab694ba554147ea16dc1d1c65b2578dc9fc36991ec42bf14bd149/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9445302c76cab694ba554147ea16dc1d1c65b2578dc9fc36991ec42bf14bd149/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9445302c76cab694ba554147ea16dc1d1c65b2578dc9fc36991ec42bf14bd149/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:23 compute-0 podman[292015]: 2025-11-24 10:07:23.248694062 +0000 UTC m=+0.116824877 container init cfbc8926ca388c81d462d0f7ad73845169fcbe598cd54cdf4107f9e90f45abd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:07:23 compute-0 podman[292015]: 2025-11-24 10:07:23.15722903 +0000 UTC m=+0.025359755 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:07:23 compute-0 podman[292015]: 2025-11-24 10:07:23.265624834 +0000 UTC m=+0.133755549 container start cfbc8926ca388c81d462d0f7ad73845169fcbe598cd54cdf4107f9e90f45abd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mcclintock, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 10:07:23 compute-0 podman[292015]: 2025-11-24 10:07:23.268601448 +0000 UTC m=+0.136732183 container attach cfbc8926ca388c81d462d0f7ad73845169fcbe598cd54cdf4107f9e90f45abd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]: {
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:     "0": [
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:         {
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             "devices": [
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "/dev/loop3"
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             ],
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             "lv_name": "ceph_lv0",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             "lv_size": "21470642176",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             "name": "ceph_lv0",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             "tags": {
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.cluster_name": "ceph",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.crush_device_class": "",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.encrypted": "0",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.osd_id": "0",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.type": "block",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.vdo": "0",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:                 "ceph.with_tpm": "0"
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             },
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             "type": "block",
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:             "vg_name": "ceph_vg0"
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:         }
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]:     ]
Nov 24 10:07:23 compute-0 competent_mcclintock[292032]: }
Nov 24 10:07:23 compute-0 systemd[1]: libpod-cfbc8926ca388c81d462d0f7ad73845169fcbe598cd54cdf4107f9e90f45abd4.scope: Deactivated successfully.
Nov 24 10:07:23 compute-0 podman[292015]: 2025-11-24 10:07:23.524581566 +0000 UTC m=+0.392712301 container died cfbc8926ca388c81d462d0f7ad73845169fcbe598cd54cdf4107f9e90f45abd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:07:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9445302c76cab694ba554147ea16dc1d1c65b2578dc9fc36991ec42bf14bd149-merged.mount: Deactivated successfully.
Nov 24 10:07:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:23.572Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:07:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:23.574Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:07:23 compute-0 podman[292015]: 2025-11-24 10:07:23.574941243 +0000 UTC m=+0.443071958 container remove cfbc8926ca388c81d462d0f7ad73845169fcbe598cd54cdf4107f9e90f45abd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mcclintock, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 10:07:23 compute-0 systemd[1]: libpod-conmon-cfbc8926ca388c81d462d0f7ad73845169fcbe598cd54cdf4107f9e90f45abd4.scope: Deactivated successfully.
Nov 24 10:07:23 compute-0 sudo[291908]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:23.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:23 compute-0 sudo[292056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:07:23 compute-0 sudo[292056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:23 compute-0 sudo[292056]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:23 compute-0 sudo[292081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:07:23 compute-0 sudo[292081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:23 compute-0 nova_compute[257700]: 2025-11-24 10:07:23.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:23 compute-0 nova_compute[257700]: 2025-11-24 10:07:23.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:23 compute-0 nova_compute[257700]: 2025-11-24 10:07:23.943 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:07:23 compute-0 nova_compute[257700]: 2025-11-24 10:07:23.943 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:07:23 compute-0 nova_compute[257700]: 2025-11-24 10:07:23.943 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:07:23 compute-0 nova_compute[257700]: 2025-11-24 10:07:23.943 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:07:23 compute-0 nova_compute[257700]: 2025-11-24 10:07:23.944 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:07:24 compute-0 ceph-mon[74331]: pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:24 compute-0 podman[292165]: 2025-11-24 10:07:24.164103785 +0000 UTC m=+0.043740903 container create 6bdd257335df304cfaad04647859a47b8bef184dfe40e3eeef40c421b3c8f628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 10:07:24 compute-0 systemd[1]: Started libpod-conmon-6bdd257335df304cfaad04647859a47b8bef184dfe40e3eeef40c421b3c8f628.scope.
Nov 24 10:07:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:07:24 compute-0 podman[292165]: 2025-11-24 10:07:24.145434029 +0000 UTC m=+0.025071167 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:07:24 compute-0 podman[292165]: 2025-11-24 10:07:24.246968203 +0000 UTC m=+0.126605351 container init 6bdd257335df304cfaad04647859a47b8bef184dfe40e3eeef40c421b3c8f628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:07:24 compute-0 podman[292165]: 2025-11-24 10:07:24.253727272 +0000 UTC m=+0.133364390 container start 6bdd257335df304cfaad04647859a47b8bef184dfe40e3eeef40c421b3c8f628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 10:07:24 compute-0 podman[292165]: 2025-11-24 10:07:24.257386012 +0000 UTC m=+0.137023130 container attach 6bdd257335df304cfaad04647859a47b8bef184dfe40e3eeef40c421b3c8f628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Nov 24 10:07:24 compute-0 systemd[1]: libpod-6bdd257335df304cfaad04647859a47b8bef184dfe40e3eeef40c421b3c8f628.scope: Deactivated successfully.
Nov 24 10:07:24 compute-0 charming_black[292182]: 167 167
Nov 24 10:07:24 compute-0 conmon[292182]: conmon 6bdd257335df304cfaad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bdd257335df304cfaad04647859a47b8bef184dfe40e3eeef40c421b3c8f628.scope/container/memory.events
Nov 24 10:07:24 compute-0 podman[292165]: 2025-11-24 10:07:24.261531506 +0000 UTC m=+0.141168634 container died 6bdd257335df304cfaad04647859a47b8bef184dfe40e3eeef40c421b3c8f628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_black, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:07:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-68017f4ebc4cc2613321faa286b8e09f4ed377877d75a17c3bc71f952d20ded2-merged.mount: Deactivated successfully.
Nov 24 10:07:24 compute-0 podman[292165]: 2025-11-24 10:07:24.298293814 +0000 UTC m=+0.177930932 container remove 6bdd257335df304cfaad04647859a47b8bef184dfe40e3eeef40c421b3c8f628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:07:24 compute-0 systemd[1]: libpod-conmon-6bdd257335df304cfaad04647859a47b8bef184dfe40e3eeef40c421b3c8f628.scope: Deactivated successfully.
Nov 24 10:07:24 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:07:24 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297756788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:24 compute-0 nova_compute[257700]: 2025-11-24 10:07:24.417 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:07:24 compute-0 podman[292210]: 2025-11-24 10:07:24.492785836 +0000 UTC m=+0.063984046 container create 646c3ccdece7bb67945bcc1d4b6deaa3c941513e8cd3752e2f9bcbd7b3ffb2e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 10:07:24 compute-0 systemd[1]: Started libpod-conmon-646c3ccdece7bb67945bcc1d4b6deaa3c941513e8cd3752e2f9bcbd7b3ffb2e2.scope.
Nov 24 10:07:24 compute-0 podman[292210]: 2025-11-24 10:07:24.465074566 +0000 UTC m=+0.036272796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:07:24 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ba213785eaeef2c21b8a5e5748e5a89b6a83c52d8b93313cb0f7f21ea51900/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ba213785eaeef2c21b8a5e5748e5a89b6a83c52d8b93313cb0f7f21ea51900/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ba213785eaeef2c21b8a5e5748e5a89b6a83c52d8b93313cb0f7f21ea51900/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ba213785eaeef2c21b8a5e5748e5a89b6a83c52d8b93313cb0f7f21ea51900/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:07:24 compute-0 podman[292210]: 2025-11-24 10:07:24.597578832 +0000 UTC m=+0.168777142 container init 646c3ccdece7bb67945bcc1d4b6deaa3c941513e8cd3752e2f9bcbd7b3ffb2e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_khorana, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:07:24 compute-0 nova_compute[257700]: 2025-11-24 10:07:24.604 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:07:24 compute-0 nova_compute[257700]: 2025-11-24 10:07:24.605 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4430MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:07:24 compute-0 nova_compute[257700]: 2025-11-24 10:07:24.606 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:07:24 compute-0 nova_compute[257700]: 2025-11-24 10:07:24.606 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:07:24 compute-0 podman[292210]: 2025-11-24 10:07:24.610199157 +0000 UTC m=+0.181397357 container start 646c3ccdece7bb67945bcc1d4b6deaa3c941513e8cd3752e2f9bcbd7b3ffb2e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_khorana, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 10:07:24 compute-0 podman[292210]: 2025-11-24 10:07:24.616495704 +0000 UTC m=+0.187693914 container attach 646c3ccdece7bb67945bcc1d4b6deaa3c941513e8cd3752e2f9bcbd7b3ffb2e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_khorana, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:07:24 compute-0 nova_compute[257700]: 2025-11-24 10:07:24.660 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:07:24 compute-0 nova_compute[257700]: 2025-11-24 10:07:24.661 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:07:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 744 B/s rd, 0 op/s
Nov 24 10:07:24 compute-0 nova_compute[257700]: 2025-11-24 10:07:24.809 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:07:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:07:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:07:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:07:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:07:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:25.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
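
These radosgw "beast" access lines recur roughly every two seconds for the rest of this window: anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and .102, all returning 200, which is consistent with load-balancer health checks. A hedged parser for triaging them; the regex is fitted to the lines in this log, not to any documented RGW format:

    # Pull client, status and latency out of a beast access line.
    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous '
            '[24/Nov/2025:10:07:25.093 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))
    # -> 192.168.122.102 200 0.000000000
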
Nov 24 10:07:25 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/297756788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:07:25 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1761695698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:25 compute-0 lvm[292320]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:07:25 compute-0 lvm[292320]: VG ceph_vg0 finished
Nov 24 10:07:25 compute-0 nova_compute[257700]: 2025-11-24 10:07:25.261 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
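
The disk stats in the resource view come from nova shelling out to `ceph df` via oslo.concurrency's processutils (dispatched at 10:07:24.809 above, returned 0 in 0.453s here). The equivalent call, with the command line copied from the log; the JSON key at the end is an assumption about `ceph df --format=json` output on this release:

    # Subprocess call matching the logged CMD line.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # e.g. stats['stats']['total_avail_bytes']; verify the key against
    # your Ceph release before relying on it.
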
Nov 24 10:07:25 compute-0 nova_compute[257700]: 2025-11-24 10:07:25.267 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:07:25 compute-0 nova_compute[257700]: 2025-11-24 10:07:25.283 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
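
The inventory dict reported to placement implies the schedulable capacity directly: placement exposes (total - reserved) * allocation_ratio units per resource class. Worked from the logged numbers:

    # Capacity implied by the inventory in the line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~52.2 (FP rounding aside)
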
Nov 24 10:07:25 compute-0 nova_compute[257700]: 2025-11-24 10:07:25.284 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:07:25 compute-0 nova_compute[257700]: 2025-11-24 10:07:25.284 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:07:25 compute-0 lucid_khorana[292226]: {}
Nov 24 10:07:25 compute-0 systemd[1]: libpod-646c3ccdece7bb67945bcc1d4b6deaa3c941513e8cd3752e2f9bcbd7b3ffb2e2.scope: Deactivated successfully.
Nov 24 10:07:25 compute-0 systemd[1]: libpod-646c3ccdece7bb67945bcc1d4b6deaa3c941513e8cd3752e2f9bcbd7b3ffb2e2.scope: Consumed 1.075s CPU time.
Nov 24 10:07:25 compute-0 podman[292326]: 2025-11-24 10:07:25.370225613 +0000 UTC m=+0.021323493 container died 646c3ccdece7bb67945bcc1d4b6deaa3c941513e8cd3752e2f9bcbd7b3ffb2e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:07:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3ba213785eaeef2c21b8a5e5748e5a89b6a83c52d8b93313cb0f7f21ea51900-merged.mount: Deactivated successfully.
Nov 24 10:07:25 compute-0 podman[292326]: 2025-11-24 10:07:25.405606055 +0000 UTC m=+0.056703905 container remove 646c3ccdece7bb67945bcc1d4b6deaa3c941513e8cd3752e2f9bcbd7b3ffb2e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_khorana, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:07:25 compute-0 systemd[1]: libpod-conmon-646c3ccdece7bb67945bcc1d4b6deaa3c941513e8cd3752e2f9bcbd7b3ffb2e2.scope: Deactivated successfully.
Nov 24 10:07:25 compute-0 sudo[292081]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:07:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:25 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:07:25 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:25 compute-0 sudo[292341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:07:25 compute-0 sudo[292341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:25 compute-0 sudo[292341]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:25.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:26 compute-0 ceph-mon[74331]: pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 744 B/s rd, 0 op/s
Nov 24 10:07:26 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1761695698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:26 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:26 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:07:26 compute-0 nova_compute[257700]: 2025-11-24 10:07:26.279 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:26 compute-0 nova_compute[257700]: 2025-11-24 10:07:26.291 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:26 compute-0 nova_compute[257700]: 2025-11-24 10:07:26.291 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:07:26 compute-0 nova_compute[257700]: 2025-11-24 10:07:26.291 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:07:26 compute-0 nova_compute[257700]: 2025-11-24 10:07:26.299 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:07:26 compute-0 nova_compute[257700]: 2025-11-24 10:07:26.299 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:26 compute-0 nova_compute[257700]: 2025-11-24 10:07:26.300 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
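
This burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic-task machinery walking every decorated method on the manager. A minimal sketch of how such a task is declared; the class and method names here are made up, only the decorator and base class come from the library:

    # Periodic-task pattern behind the run_periodic_tasks lines.
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_something(self, context):
            # Invoked roughly every 60s once the service loop drives
            # run_periodic_tasks(), producing DEBUG lines like the above.
            pass
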
Nov 24 10:07:26 compute-0 nova_compute[257700]: 2025-11-24 10:07:26.326 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:26 compute-0 nova_compute[257700]: 2025-11-24 10:07:26.494 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:26 compute-0 nova_compute[257700]: 2025-11-24 10:07:26.896 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:27.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:27.596Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
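
This alertmanager dispatch error repeats throughout the section: the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443) never answer within the context deadline, and the 10:07:53 line below shows the underlying "dial tcp ... i/o timeout". A quick reachability probe against the logged URL; the 5-second timeout is an arbitrary choice:

    # Probe one of the failing webhook endpoints from the log.
    import urllib.error
    import urllib.request

    URL = 'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver'
    try:
        urllib.request.urlopen(URL, timeout=5)
        print('reachable')
    except urllib.error.HTTPError as exc:
        print('reachable (HTTP %s)' % exc.code)  # TCP path is fine
    except Exception as exc:
        print('unreachable:', exc)  # mirrors the dial timeout in the log
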
Nov 24 10:07:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:27.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:27 compute-0 nova_compute[257700]: 2025-11-24 10:07:27.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:27 compute-0 nova_compute[257700]: 2025-11-24 10:07:27.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:27 compute-0 nova_compute[257700]: 2025-11-24 10:07:27.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 10:07:27 compute-0 nova_compute[257700]: 2025-11-24 10:07:27.952 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 10:07:28 compute-0 ceph-mon[74331]: pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:28 compute-0 podman[292370]: 2025-11-24 10:07:28.782826182 +0000 UTC m=+0.054114151 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 10:07:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:28.943Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:28 compute-0 nova_compute[257700]: 2025-11-24 10:07:28.952 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:28 compute-0 nova_compute[257700]: 2025-11-24 10:07:28.952 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:07:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:29.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1717278106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:29.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:29 compute-0 sudo[292391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:07:29 compute-0 sudo[292391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:29 compute-0 sudo[292391]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:07:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:07:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:07:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:07:30 compute-0 ceph-mon[74331]: pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3520934719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4056666050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:07:30 compute-0 nova_compute[257700]: 2025-11-24 10:07:30.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:07:30] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Nov 24 10:07:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:07:30] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Nov 24 10:07:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:31.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1905884823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:07:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:07:31 compute-0 nova_compute[257700]: 2025-11-24 10:07:31.329 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:31 compute-0 nova_compute[257700]: 2025-11-24 10:07:31.496 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:31.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:31 compute-0 nova_compute[257700]: 2025-11-24 10:07:31.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:32 compute-0 ceph-mon[74331]: pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:07:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:32 compute-0 nova_compute[257700]: 2025-11-24 10:07:32.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:33.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:33.575Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:33.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:34 compute-0 ceph-mon[74331]: pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:07:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:07:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:07:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:07:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:35.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:35.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:36 compute-0 ceph-mon[74331]: pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:36 compute-0 nova_compute[257700]: 2025-11-24 10:07:36.331 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:36 compute-0 nova_compute[257700]: 2025-11-24 10:07:36.498 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:37.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:37.597Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:37.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:38 compute-0 ceph-mon[74331]: pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:38.944Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:39.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:39.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:39 compute-0 sshd-session[292389]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:07:39 compute-0 sshd-session[292389]: banner exchange: Connection from 121.31.210.125 port 44760: Connection timed out
Nov 24 10:07:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:07:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:07:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:07:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:07:40 compute-0 ceph-mon[74331]: pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:07:40] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Nov 24 10:07:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:07:40] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Nov 24 10:07:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:07:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:41.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:07:41 compute-0 nova_compute[257700]: 2025-11-24 10:07:41.333 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:41 compute-0 nova_compute[257700]: 2025-11-24 10:07:41.499 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:41.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:42 compute-0 sshd-session[292418]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:07:42 compute-0 sshd-session[292418]: banner exchange: Connection from 14.215.126.91 port 47374: Connection timed out
Nov 24 10:07:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:42 compute-0 ceph-mon[74331]: pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:43.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:43.576Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:43.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:43 compute-0 nova_compute[257700]: 2025-11-24 10:07:43.932 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:07:43 compute-0 nova_compute[257700]: 2025-11-24 10:07:43.933 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 10:07:44 compute-0 ceph-mon[74331]: pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:07:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:07:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:07:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:07:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:45.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:07:45
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'vms', 'backups', '.mgr', '.nfs', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta']
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:07:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:45.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
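
Across all twelve pools above, the autoscaler's logged pg target equals usage_ratio x bias x 300. The factor 300 is an inference from these numbers (for example, the default mon_target_pg_per_osd of 100 times the 3 OSDs behind this 60 GiB cluster), not a quoted Ceph formula. Checking two of the pools:

    # Reproduce two logged pg targets from the "using ... bias ..." lines.
    for pool, usage, bias in [
            ('images',             0.000665858301588852,  1.0),
            ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0)]:
        print(pool, usage * bias * 300)
    # images             ~0.1997574904766556  (log: 0.19975749047665559)
    # cephfs.cephfs.meta 0.0006104707950771635 (log: same)

The targets are then quantized to a power of two, which is why 0.2 still reports "quantized to 32 (current 32)".
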
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:07:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:07:46 compute-0 ceph-mon[74331]: pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:07:46 compute-0 nova_compute[257700]: 2025-11-24 10:07:46.334 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:46 compute-0 nova_compute[257700]: 2025-11-24 10:07:46.501 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:07:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:47.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:07:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:47.597Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:47.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:48 compute-0 ceph-mon[74331]: pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:48.945Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:07:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:49.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:07:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:49.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:49 compute-0 sshd-session[292436]: Invalid user default from 83.229.122.23 port 48936
Nov 24 10:07:49 compute-0 sudo[292439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:07:49 compute-0 sudo[292439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:07:49 compute-0 sudo[292439]: pam_unix(sudo:session): session closed for user root
Nov 24 10:07:49 compute-0 sshd-session[292436]: Received disconnect from 83.229.122.23 port 48936:11: Bye Bye [preauth]
Nov 24 10:07:49 compute-0 sshd-session[292436]: Disconnected from invalid user default 83.229.122.23 port 48936 [preauth]
Nov 24 10:07:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:07:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:07:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:07:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:07:50 compute-0 ceph-mon[74331]: pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:07:50] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 24 10:07:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:07:50] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 24 10:07:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:51.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:51 compute-0 nova_compute[257700]: 2025-11-24 10:07:51.337 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:51 compute-0 nova_compute[257700]: 2025-11-24 10:07:51.503 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:51.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:52 compute-0 ceph-mon[74331]: pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:52 compute-0 podman[292467]: 2025-11-24 10:07:52.778489575 +0000 UTC m=+0.056954942 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 10:07:52 compute-0 podman[292468]: 2025-11-24 10:07:52.853092737 +0000 UTC m=+0.128672642 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 10:07:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:53.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:53.577Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:07:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:53.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
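
[editor's note] The alertmanager dispatcher cannot deliver the ceph-dashboard webhook to compute-1/compute-2 on port 8443: first a "dial tcp ... i/o timeout", then retries canceled by the context deadline. Reproducing the same TCP dial from this host separates a network or firewall problem from a receiver that accepts but stalls; a sketch:

    # Hedged sketch: mimic the "dial tcp <ip>:8443" the dispatcher logs, to
    # tell a refused/filtered port apart from a receiver that accepts slowly.
    import socket

    for host in ("192.168.122.101", "192.168.122.102"):  # compute-1/-2 per log
        s = socket.socket()
        s.settimeout(5)
        try:
            s.connect((host, 8443))
            print(host, "TCP connect OK")
        except OSError as exc:                           # timeout, refused, ...
            print(host, "connect failed:", exc)
        finally:
            s.close()
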
Nov 24 10:07:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:53.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:54 compute-0 ceph-mon[74331]: pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:07:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:07:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:07:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:07:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
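
[editor's note] This ganesha.nfsd instance re-enters a 90-second grace period roughly every five seconds, reports zero reclaimable clients, and rados_cluster_grace_enforcing returns ret=-45 each time, so the grace state never settles. With the rados_cluster recovery backend the shared grace database lives in a RADOS object and can be inspected with the ganesha-rados-grace helper shipped with nfs-ganesha; a hedged sketch, where the pool and namespace are placeholders (check the ganesha config for the real values, and flags vary by release):

    # Hedged sketch: dump the rados_cluster grace DB. ganesha-rados-grace ships
    # with nfs-ganesha; --pool/--ns values below are placeholders, not from
    # this log.
    import subprocess

    out = subprocess.run(
        ["ganesha-rados-grace", "--pool", ".nfs", "--ns", "cephfs", "dump"],
        capture_output=True, text=True, check=False,
    )
    print(out.stdout or out.stderr)
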
Nov 24 10:07:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:55.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:55.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:56 compute-0 nova_compute[257700]: 2025-11-24 10:07:56.340 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:56 compute-0 ceph-mon[74331]: pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:07:56 compute-0 nova_compute[257700]: 2025-11-24 10:07:56.505 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:07:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:07:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:57.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:07:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:57.599Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:07:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:57.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:07:58 compute-0 ceph-mon[74331]: pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:07:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:07:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:07:58.946Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:07:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:07:59.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:07:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:07:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:07:59.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:07:59 compute-0 podman[292520]: 2025-11-24 10:07:59.775842998 +0000 UTC m=+0.051550768 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:08:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:08:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:08:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:07:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:08:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:08:00 compute-0 ceph-mon[74331]: pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:08:00] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Nov 24 10:08:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:08:00] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
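
[editor's note] The mgr prometheus module is answering Prometheus scrapes of /metrics every ten seconds (about 48 kB of text exposition per scrape). Fetching the same endpoint by hand is the quickest way to see what the dashboard alerts are computed from; a sketch, where port 9283 is the module's documented default and an assumption here, since the journal shows only the path:

    # Hedged sketch: fetch the ceph-mgr prometheus exposition by hand.
    # Port 9283 is the module default, an assumption; the journal omits it.
    from urllib.request import urlopen

    with urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("ceph_health_status"):
                print(line)        # 0 is HEALTH_OK in this exposition
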
Nov 24 10:08:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:01.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:01 compute-0 nova_compute[257700]: 2025-11-24 10:08:01.344 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:08:01 compute-0 nova_compute[257700]: 2025-11-24 10:08:01.539 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:01.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:02 compute-0 ceph-mon[74331]: pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2332319195' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:08:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2332319195' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:08:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:03.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:03.578Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:03.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:04 compute-0 ceph-mon[74331]: pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:08:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:08:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:08:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:08:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:05.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:05.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:06 compute-0 nova_compute[257700]: 2025-11-24 10:08:06.345 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:06 compute-0 ceph-mon[74331]: pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:06 compute-0 nova_compute[257700]: 2025-11-24 10:08:06.541 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:07.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:07.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:08:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:07.599Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:08:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:07.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:08 compute-0 ceph-mon[74331]: pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 0 B/s wr, 168 op/s
Nov 24 10:08:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:08.948Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:09.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:09.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:09 compute-0 sudo[292550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:08:09 compute-0 sudo[292550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:09 compute-0 sudo[292550]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:08:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:08:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:08:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:08:10 compute-0 ceph-mon[74331]: pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:08:10] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Nov 24 10:08:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:08:10] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Nov 24 10:08:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:11.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:11 compute-0 nova_compute[257700]: 2025-11-24 10:08:11.347 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:11 compute-0 nova_compute[257700]: 2025-11-24 10:08:11.541 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:11.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:12 compute-0 ceph-mon[74331]: pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:13.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:13.579Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:13.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:14 compute-0 ceph-mon[74331]: pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:08:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:08:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:08:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:08:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:15.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:08:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:08:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:08:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:08:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:08:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:08:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:08:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:15.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:16 compute-0 nova_compute[257700]: 2025-11-24 10:08:16.348 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:16 compute-0 nova_compute[257700]: 2025-11-24 10:08:16.543 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:16 compute-0 ceph-mon[74331]: pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:17.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:17.600Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:17 compute-0 ceph-mon[74331]: pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:17.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:17 compute-0 sshd-session[292546]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:08:17 compute-0 sshd-session[292546]: banner exchange: Connection from 14.215.126.91 port 54306: Connection timed out
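
[editor's note] The kex_exchange_identification timeout is an external client (14.215.126.91, a public address) that connected to sshd and never completed the banner exchange, which typically indicates an internet scanner and is unrelated to the cluster traffic around it. Counting such probes per source IP over a saved journal shows whether it is a one-off; a sketch, with the log path an assumption:

    # Hedged sketch: count sshd banner/kex failures per client IP in a log.
    import re
    from collections import Counter

    hits = Counter()
    with open("/var/log/messages") as fh:       # path is an assumption
        for line in fh:
            if "sshd" not in line:
                continue
            m = re.search(r"Connection from (\d+\.\d+\.\d+\.\d+) port \d+",
                          line)
            if m:
                hits[m.group(1)] += 1
    print(hits.most_common(10))
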
Nov 24 10:08:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:18.949Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:19.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:19.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:19 compute-0 ceph-mon[74331]: pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:08:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:08:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:08:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:08:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:08:20.580 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:08:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:08:20.581 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:08:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:08:20.582 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:08:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:08:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:08:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:08:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:08:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:21.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:21 compute-0 nova_compute[257700]: 2025-11-24 10:08:21.355 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:21 compute-0 nova_compute[257700]: 2025-11-24 10:08:21.545 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:21.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:21 compute-0 ceph-mon[74331]: pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:23.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:23.579Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:23.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:23 compute-0 ceph-mon[74331]: pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:23 compute-0 podman[292589]: 2025-11-24 10:08:23.802009903 +0000 UTC m=+0.067504445 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 24 10:08:23 compute-0 podman[292590]: 2025-11-24 10:08:23.825622142 +0000 UTC m=+0.090732665 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
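
[editor's note] These podman health_status events show the periodic healthchecks for multipathd and ovn_controller passing (health_status=healthy, failing streak 0); per the config_data, each container runs /openstack/healthcheck mounted from /var/lib/openstack/healthchecks/<name>. The last result can also be read back from podman inspect; a sketch, with a defensive lookup because the JSON key spelling has differed across podman releases:

    # Hedged sketch: read a container's last healthcheck result via
    # `podman inspect`. Key layout differs across podman releases, so fall
    # back between spellings.
    import json
    import subprocess

    raw = subprocess.run(["podman", "inspect", "ovn_controller"],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(raw)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))
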
Nov 24 10:08:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:24 compute-0 nova_compute[257700]: 2025-11-24 10:08:24.953 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:08:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:08:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:08:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:08:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:25.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:25.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:25 compute-0 ceph-mon[74331]: pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:25 compute-0 sudo[292636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:08:25 compute-0 sudo[292636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:25 compute-0 sudo[292636]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:25 compute-0 nova_compute[257700]: 2025-11-24 10:08:25.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:25 compute-0 nova_compute[257700]: 2025-11-24 10:08:25.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:25 compute-0 nova_compute[257700]: 2025-11-24 10:08:25.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:25 compute-0 nova_compute[257700]: 2025-11-24 10:08:25.946 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:08:25 compute-0 nova_compute[257700]: 2025-11-24 10:08:25.947 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:08:25 compute-0 nova_compute[257700]: 2025-11-24 10:08:25.947 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:08:25 compute-0 nova_compute[257700]: 2025-11-24 10:08:25.947 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:08:25 compute-0 nova_compute[257700]: 2025-11-24 10:08:25.947 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:08:25 compute-0 sudo[292661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:08:25 compute-0 sudo[292661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 10:08:26 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 10:08:26 compute-0 nova_compute[257700]: 2025-11-24 10:08:26.357 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 10:08:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:08:26 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115580703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:26 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:26 compute-0 nova_compute[257700]: 2025-11-24 10:08:26.403 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:08:26 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:26 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 10:08:26 compute-0 sudo[292661]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:26 compute-0 nova_compute[257700]: 2025-11-24 10:08:26.547 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:26 compute-0 nova_compute[257700]: 2025-11-24 10:08:26.567 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:08:26 compute-0 nova_compute[257700]: 2025-11-24 10:08:26.568 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4493MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:08:26 compute-0 nova_compute[257700]: 2025-11-24 10:08:26.568 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:08:26 compute-0 nova_compute[257700]: 2025-11-24 10:08:26.568 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:08:26 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:26 compute-0 nova_compute[257700]: 2025-11-24 10:08:26.742 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:08:26 compute-0 nova_compute[257700]: 2025-11-24 10:08:26.743 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:08:26 compute-0 nova_compute[257700]: 2025-11-24 10:08:26.853 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:08:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:27.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.260901) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907260957, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 988, "num_deletes": 251, "total_data_size": 1725709, "memory_usage": 1741976, "flush_reason": "Manual Compaction"}
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 24 10:08:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:08:27 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1085546858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907271778, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1115589, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34665, "largest_seqno": 35652, "table_properties": {"data_size": 1111576, "index_size": 1665, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10536, "raw_average_key_size": 20, "raw_value_size": 1103012, "raw_average_value_size": 2197, "num_data_blocks": 71, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978823, "oldest_key_time": 1763978823, "file_creation_time": 1763978907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 10906 microseconds, and 3556 cpu microseconds.
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.271821) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1115589 bytes OK
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.271842) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.282371) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.282390) EVENT_LOG_v1 {"time_micros": 1763978907282384, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.282406) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1721146, prev total WAL file size 1721146, number of live WAL files 2.
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.283066) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303037' seq:72057594037927935, type:22 .. '6D6772737461740031323539' seq:0, type:0; will stop at (end)
Nov 24 10:08:27 compute-0 nova_compute[257700]: 2025-11-24 10:08:27.282 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(1089KB)], [74(14MB)]
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907283134, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15812274, "oldest_snapshot_seqno": -1}
Nov 24 10:08:27 compute-0 nova_compute[257700]: 2025-11-24 10:08:27.289 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:08:27 compute-0 nova_compute[257700]: 2025-11-24 10:08:27.308 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:08:27 compute-0 nova_compute[257700]: 2025-11-24 10:08:27.311 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:08:27 compute-0 nova_compute[257700]: 2025-11-24 10:08:27.312 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:08:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6564 keys, 12255710 bytes, temperature: kUnknown
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907368826, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12255710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12214960, "index_size": 23230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 172696, "raw_average_key_size": 26, "raw_value_size": 12099843, "raw_average_value_size": 1843, "num_data_blocks": 909, "num_entries": 6564, "num_filter_entries": 6564, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763978907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.369218) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12255710 bytes
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.393818) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.0 rd, 142.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 14.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(25.2) write-amplify(11.0) OK, records in: 7049, records dropped: 485 output_compression: NoCompression
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.393834) EVENT_LOG_v1 {"time_micros": 1763978907393826, "job": 42, "event": "compaction_finished", "compaction_time_micros": 85916, "compaction_time_cpu_micros": 25795, "output_level": 6, "num_output_files": 1, "total_output_size": 12255710, "num_input_records": 7049, "num_output_records": 6564, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907394236, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763978907396536, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.282974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.396609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.396617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.396619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.396621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:08:27 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:08:27.396623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:08:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:27 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:27 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1115580703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:27 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:27 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:27 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:27 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1085546858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:27 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:08:27 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:08:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:08:27 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:27.601Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:08:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:27.602Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:27 compute-0 sudo[292767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:08:27 compute-0 sudo[292767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:27 compute-0 sudo[292767]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:27.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:27 compute-0 sudo[292792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:08:27 compute-0 sudo[292792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:28 compute-0 podman[292857]: 2025-11-24 10:08:28.103206365 +0000 UTC m=+0.051181718 container create 8d5f2346bc5afa9b34108ca9016f6a22cddef0f5cb64a5aa41abf0cad01bba31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 10:08:28 compute-0 systemd[1]: Started libpod-conmon-8d5f2346bc5afa9b34108ca9016f6a22cddef0f5cb64a5aa41abf0cad01bba31.scope.
Nov 24 10:08:28 compute-0 podman[292857]: 2025-11-24 10:08:28.072581181 +0000 UTC m=+0.020556554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:08:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:08:28 compute-0 podman[292857]: 2025-11-24 10:08:28.236321417 +0000 UTC m=+0.184296790 container init 8d5f2346bc5afa9b34108ca9016f6a22cddef0f5cb64a5aa41abf0cad01bba31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:08:28 compute-0 podman[292857]: 2025-11-24 10:08:28.242134512 +0000 UTC m=+0.190109855 container start 8d5f2346bc5afa9b34108ca9016f6a22cddef0f5cb64a5aa41abf0cad01bba31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_raman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 10:08:28 compute-0 objective_raman[292874]: 167 167
Nov 24 10:08:28 compute-0 systemd[1]: libpod-8d5f2346bc5afa9b34108ca9016f6a22cddef0f5cb64a5aa41abf0cad01bba31.scope: Deactivated successfully.
Nov 24 10:08:28 compute-0 podman[292857]: 2025-11-24 10:08:28.277960446 +0000 UTC m=+0.225935819 container attach 8d5f2346bc5afa9b34108ca9016f6a22cddef0f5cb64a5aa41abf0cad01bba31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Nov 24 10:08:28 compute-0 podman[292857]: 2025-11-24 10:08:28.279291559 +0000 UTC m=+0.227266913 container died 8d5f2346bc5afa9b34108ca9016f6a22cddef0f5cb64a5aa41abf0cad01bba31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:08:28 compute-0 sshd-session[292762]: Invalid user invitado from 36.255.3.203 port 52501
Nov 24 10:08:28 compute-0 nova_compute[257700]: 2025-11-24 10:08:28.312 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:28 compute-0 nova_compute[257700]: 2025-11-24 10:08:28.313 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:28 compute-0 nova_compute[257700]: 2025-11-24 10:08:28.314 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:08:28 compute-0 nova_compute[257700]: 2025-11-24 10:08:28.314 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:08:28 compute-0 nova_compute[257700]: 2025-11-24 10:08:28.341 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-341865b72945641f3b1908ef5d1edd516555dfd1eda2f70954f56b9fd5baa1c0-merged.mount: Deactivated successfully.
Nov 24 10:08:28 compute-0 podman[292857]: 2025-11-24 10:08:28.4676626 +0000 UTC m=+0.415637963 container remove 8d5f2346bc5afa9b34108ca9016f6a22cddef0f5cb64a5aa41abf0cad01bba31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 24 10:08:28 compute-0 sshd-session[292762]: Received disconnect from 36.255.3.203 port 52501:11: Bye Bye [preauth]
Nov 24 10:08:28 compute-0 sshd-session[292762]: Disconnected from invalid user invitado 36.255.3.203 port 52501 [preauth]
Nov 24 10:08:28 compute-0 systemd[1]: libpod-conmon-8d5f2346bc5afa9b34108ca9016f6a22cddef0f5cb64a5aa41abf0cad01bba31.scope: Deactivated successfully.
Nov 24 10:08:28 compute-0 ceph-mon[74331]: pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:28 compute-0 ceph-mon[74331]: pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:28 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:28 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:28 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:08:28 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:08:28 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:08:28 compute-0 podman[292901]: 2025-11-24 10:08:28.618913764 +0000 UTC m=+0.040974003 container create b466c6f194d34a8a3f749656a5af7c41cd00a8d28e2e3b27629d1ce15360dfa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:08:28 compute-0 systemd[1]: Started libpod-conmon-b466c6f194d34a8a3f749656a5af7c41cd00a8d28e2e3b27629d1ce15360dfa2.scope.
Nov 24 10:08:28 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:08:28 compute-0 podman[292901]: 2025-11-24 10:08:28.600489914 +0000 UTC m=+0.022550153 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1c51b1c1cbcde420f8b4ab4c604a95356ca7399d54e43a8ad8a0bb02838833/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1c51b1c1cbcde420f8b4ab4c604a95356ca7399d54e43a8ad8a0bb02838833/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1c51b1c1cbcde420f8b4ab4c604a95356ca7399d54e43a8ad8a0bb02838833/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1c51b1c1cbcde420f8b4ab4c604a95356ca7399d54e43a8ad8a0bb02838833/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1c51b1c1cbcde420f8b4ab4c604a95356ca7399d54e43a8ad8a0bb02838833/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:28 compute-0 podman[292901]: 2025-11-24 10:08:28.708955211 +0000 UTC m=+0.131015450 container init b466c6f194d34a8a3f749656a5af7c41cd00a8d28e2e3b27629d1ce15360dfa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_fermi, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:08:28 compute-0 podman[292901]: 2025-11-24 10:08:28.721481923 +0000 UTC m=+0.143542172 container start b466c6f194d34a8a3f749656a5af7c41cd00a8d28e2e3b27629d1ce15360dfa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:08:28 compute-0 podman[292901]: 2025-11-24 10:08:28.726217942 +0000 UTC m=+0.148278181 container attach b466c6f194d34a8a3f749656a5af7c41cd00a8d28e2e3b27629d1ce15360dfa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 10:08:28 compute-0 nova_compute[257700]: 2025-11-24 10:08:28.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:28 compute-0 nova_compute[257700]: 2025-11-24 10:08:28.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:08:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:28.950Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:29 compute-0 jovial_fermi[292917]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:08:29 compute-0 jovial_fermi[292917]: --> All data devices are unavailable
Nov 24 10:08:29 compute-0 systemd[1]: libpod-b466c6f194d34a8a3f749656a5af7c41cd00a8d28e2e3b27629d1ce15360dfa2.scope: Deactivated successfully.
Nov 24 10:08:29 compute-0 podman[292901]: 2025-11-24 10:08:29.094530613 +0000 UTC m=+0.516590892 container died b466c6f194d34a8a3f749656a5af7c41cd00a8d28e2e3b27629d1ce15360dfa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_fermi, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 24 10:08:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b1c51b1c1cbcde420f8b4ab4c604a95356ca7399d54e43a8ad8a0bb02838833-merged.mount: Deactivated successfully.
Nov 24 10:08:29 compute-0 podman[292901]: 2025-11-24 10:08:29.136741656 +0000 UTC m=+0.558801875 container remove b466c6f194d34a8a3f749656a5af7c41cd00a8d28e2e3b27629d1ce15360dfa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:08:29 compute-0 systemd[1]: libpod-conmon-b466c6f194d34a8a3f749656a5af7c41cd00a8d28e2e3b27629d1ce15360dfa2.scope: Deactivated successfully.
Nov 24 10:08:29 compute-0 sudo[292792]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:29.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:29 compute-0 sudo[292944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:08:29 compute-0 sudo[292944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:29 compute-0 sudo[292944]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:29 compute-0 sudo[292969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:08:29 compute-0 sudo[292969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/820330601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:29 compute-0 podman[293036]: 2025-11-24 10:08:29.672732511 +0000 UTC m=+0.041038075 container create 1523c1a2c64402d5c4038513df399e6b31318b82605059d0fa14127a48a2791a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brown, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 24 10:08:29 compute-0 systemd[1]: Started libpod-conmon-1523c1a2c64402d5c4038513df399e6b31318b82605059d0fa14127a48a2791a.scope.
Nov 24 10:08:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:29.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:29 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:08:29 compute-0 podman[293036]: 2025-11-24 10:08:29.652359883 +0000 UTC m=+0.020665517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:08:29 compute-0 podman[293036]: 2025-11-24 10:08:29.748506812 +0000 UTC m=+0.116812446 container init 1523c1a2c64402d5c4038513df399e6b31318b82605059d0fa14127a48a2791a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 10:08:29 compute-0 podman[293036]: 2025-11-24 10:08:29.756715627 +0000 UTC m=+0.125021201 container start 1523c1a2c64402d5c4038513df399e6b31318b82605059d0fa14127a48a2791a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Nov 24 10:08:29 compute-0 podman[293036]: 2025-11-24 10:08:29.760164553 +0000 UTC m=+0.128470127 container attach 1523c1a2c64402d5c4038513df399e6b31318b82605059d0fa14127a48a2791a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:08:29 compute-0 sleepy_brown[293053]: 167 167
Nov 24 10:08:29 compute-0 systemd[1]: libpod-1523c1a2c64402d5c4038513df399e6b31318b82605059d0fa14127a48a2791a.scope: Deactivated successfully.
Nov 24 10:08:29 compute-0 podman[293036]: 2025-11-24 10:08:29.761823544 +0000 UTC m=+0.130129148 container died 1523c1a2c64402d5c4038513df399e6b31318b82605059d0fa14127a48a2791a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Nov 24 10:08:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1a234182c52edb73e8ab681c98691f141399cb02d703a47c5a7bc988a22c9dc-merged.mount: Deactivated successfully.
Nov 24 10:08:29 compute-0 podman[293036]: 2025-11-24 10:08:29.792616272 +0000 UTC m=+0.160921846 container remove 1523c1a2c64402d5c4038513df399e6b31318b82605059d0fa14127a48a2791a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 10:08:29 compute-0 systemd[1]: libpod-conmon-1523c1a2c64402d5c4038513df399e6b31318b82605059d0fa14127a48a2791a.scope: Deactivated successfully.
Nov 24 10:08:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:08:29 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1198663139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:29 compute-0 podman[293072]: 2025-11-24 10:08:29.897056959 +0000 UTC m=+0.064239544 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 10:08:29 compute-0 podman[293093]: 2025-11-24 10:08:29.965538568 +0000 UTC m=+0.043722372 container create 0396def9a8fbb37b63f41ea1675f7bdbb1704e7e0bf4eafe29a63e13ee28eff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_faraday, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:08:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:08:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:08:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:08:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:08:30 compute-0 systemd[1]: Started libpod-conmon-0396def9a8fbb37b63f41ea1675f7bdbb1704e7e0bf4eafe29a63e13ee28eff0.scope.
Nov 24 10:08:30 compute-0 sudo[293107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:08:30 compute-0 sudo[293107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:30 compute-0 sudo[293107]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:08:30 compute-0 podman[293093]: 2025-11-24 10:08:29.944616306 +0000 UTC m=+0.022800140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:08:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c3a22d1b56b844ed22d6adebaa02ae04cb60acf58e28d621a8e42ac384a2b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c3a22d1b56b844ed22d6adebaa02ae04cb60acf58e28d621a8e42ac384a2b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c3a22d1b56b844ed22d6adebaa02ae04cb60acf58e28d621a8e42ac384a2b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c3a22d1b56b844ed22d6adebaa02ae04cb60acf58e28d621a8e42ac384a2b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:30 compute-0 podman[293093]: 2025-11-24 10:08:30.05538528 +0000 UTC m=+0.133569084 container init 0396def9a8fbb37b63f41ea1675f7bdbb1704e7e0bf4eafe29a63e13ee28eff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_faraday, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 10:08:30 compute-0 podman[293093]: 2025-11-24 10:08:30.064659362 +0000 UTC m=+0.142843166 container start 0396def9a8fbb37b63f41ea1675f7bdbb1704e7e0bf4eafe29a63e13ee28eff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_faraday, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 10:08:30 compute-0 podman[293093]: 2025-11-24 10:08:30.066987189 +0000 UTC m=+0.145171013 container attach 0396def9a8fbb37b63f41ea1675f7bdbb1704e7e0bf4eafe29a63e13ee28eff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_faraday, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]: {
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:     "0": [
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:         {
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             "devices": [
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "/dev/loop3"
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             ],
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             "lv_name": "ceph_lv0",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             "lv_size": "21470642176",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             "name": "ceph_lv0",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             "tags": {
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.cluster_name": "ceph",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.crush_device_class": "",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.encrypted": "0",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.osd_id": "0",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.type": "block",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.vdo": "0",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:                 "ceph.with_tpm": "0"
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             },
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             "type": "block",
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:             "vg_name": "ceph_vg0"
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:         }
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]:     ]
Nov 24 10:08:30 compute-0 compassionate_faraday[293133]: }
Nov 24 10:08:30 compute-0 systemd[1]: libpod-0396def9a8fbb37b63f41ea1675f7bdbb1704e7e0bf4eafe29a63e13ee28eff0.scope: Deactivated successfully.
Nov 24 10:08:30 compute-0 conmon[293133]: conmon 0396def9a8fbb37b63f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0396def9a8fbb37b63f41ea1675f7bdbb1704e7e0bf4eafe29a63e13ee28eff0.scope/container/memory.events
Nov 24 10:08:30 compute-0 podman[293093]: 2025-11-24 10:08:30.334072234 +0000 UTC m=+0.412256078 container died 0396def9a8fbb37b63f41ea1675f7bdbb1704e7e0bf4eafe29a63e13ee28eff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_faraday, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:08:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1c3a22d1b56b844ed22d6adebaa02ae04cb60acf58e28d621a8e42ac384a2b1-merged.mount: Deactivated successfully.
Nov 24 10:08:30 compute-0 podman[293093]: 2025-11-24 10:08:30.3803811 +0000 UTC m=+0.458564914 container remove 0396def9a8fbb37b63f41ea1675f7bdbb1704e7e0bf4eafe29a63e13ee28eff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_faraday, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 10:08:30 compute-0 systemd[1]: libpod-conmon-0396def9a8fbb37b63f41ea1675f7bdbb1704e7e0bf4eafe29a63e13ee28eff0.scope: Deactivated successfully.
Nov 24 10:08:30 compute-0 sudo[292969]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:30 compute-0 sudo[293155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:08:30 compute-0 sudo[293155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:30 compute-0 sudo[293155]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:30 compute-0 sudo[293180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:08:30 compute-0 sudo[293180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:30 compute-0 ceph-mon[74331]: pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1198663139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/463285442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:08:30 compute-0 podman[293244]: 2025-11-24 10:08:30.916307334 +0000 UTC m=+0.034634516 container create ab8acbcc4216b4f67b39039bda7a8b503237eae5414ce38f821644d9efde5556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:08:30 compute-0 systemd[1]: Started libpod-conmon-ab8acbcc4216b4f67b39039bda7a8b503237eae5414ce38f821644d9efde5556.scope.
Nov 24 10:08:30 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:08:30 compute-0 podman[293244]: 2025-11-24 10:08:30.990668409 +0000 UTC m=+0.108995681 container init ab8acbcc4216b4f67b39039bda7a8b503237eae5414ce38f821644d9efde5556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_saha, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:08:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:08:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:08:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:08:30] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:08:30 compute-0 podman[293244]: 2025-11-24 10:08:30.902483038 +0000 UTC m=+0.020810240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:08:30 compute-0 podman[293244]: 2025-11-24 10:08:30.998909895 +0000 UTC m=+0.117237077 container start ab8acbcc4216b4f67b39039bda7a8b503237eae5414ce38f821644d9efde5556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 10:08:31 compute-0 podman[293244]: 2025-11-24 10:08:31.002038873 +0000 UTC m=+0.120366085 container attach ab8acbcc4216b4f67b39039bda7a8b503237eae5414ce38f821644d9efde5556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_saha, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 10:08:31 compute-0 vigorous_saha[293260]: 167 167
Nov 24 10:08:31 compute-0 systemd[1]: libpod-ab8acbcc4216b4f67b39039bda7a8b503237eae5414ce38f821644d9efde5556.scope: Deactivated successfully.
Nov 24 10:08:31 compute-0 podman[293244]: 2025-11-24 10:08:31.004968166 +0000 UTC m=+0.123295348 container died ab8acbcc4216b4f67b39039bda7a8b503237eae5414ce38f821644d9efde5556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_saha, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 10:08:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3034aa17b443117cffd7f46b702f2fce6512a79d59b7d7ab34aab762871e3fa5-merged.mount: Deactivated successfully.
Nov 24 10:08:31 compute-0 podman[293244]: 2025-11-24 10:08:31.045789224 +0000 UTC m=+0.164116416 container remove ab8acbcc4216b4f67b39039bda7a8b503237eae5414ce38f821644d9efde5556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:08:31 compute-0 systemd[1]: libpod-conmon-ab8acbcc4216b4f67b39039bda7a8b503237eae5414ce38f821644d9efde5556.scope: Deactivated successfully.
Nov 24 10:08:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:31.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:31 compute-0 podman[293281]: 2025-11-24 10:08:31.226166356 +0000 UTC m=+0.067534306 container create b1d749e443a1672aa2387466b75c1b1a70c58b8292f35907ac13ffd2b4b1227d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 10:08:31 compute-0 systemd[1]: Started libpod-conmon-b1d749e443a1672aa2387466b75c1b1a70c58b8292f35907ac13ffd2b4b1227d.scope.
Nov 24 10:08:31 compute-0 podman[293281]: 2025-11-24 10:08:31.20229434 +0000 UTC m=+0.043662260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:08:31 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99cba1667c9a0f1d50d7655e1b217557bcfdfe1997a8588f3770268e7ee5cc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99cba1667c9a0f1d50d7655e1b217557bcfdfe1997a8588f3770268e7ee5cc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99cba1667c9a0f1d50d7655e1b217557bcfdfe1997a8588f3770268e7ee5cc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99cba1667c9a0f1d50d7655e1b217557bcfdfe1997a8588f3770268e7ee5cc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:08:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:31 compute-0 podman[293281]: 2025-11-24 10:08:31.329482884 +0000 UTC m=+0.170850804 container init b1d749e443a1672aa2387466b75c1b1a70c58b8292f35907ac13ffd2b4b1227d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 10:08:31 compute-0 podman[293281]: 2025-11-24 10:08:31.339026702 +0000 UTC m=+0.180394542 container start b1d749e443a1672aa2387466b75c1b1a70c58b8292f35907ac13ffd2b4b1227d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:08:31 compute-0 podman[293281]: 2025-11-24 10:08:31.342391526 +0000 UTC m=+0.183759396 container attach b1d749e443a1672aa2387466b75c1b1a70c58b8292f35907ac13ffd2b4b1227d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shtern, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:08:31 compute-0 nova_compute[257700]: 2025-11-24 10:08:31.360 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:31 compute-0 nova_compute[257700]: 2025-11-24 10:08:31.548 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3228508666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:08:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:31.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:31 compute-0 nova_compute[257700]: 2025-11-24 10:08:31.923 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:31 compute-0 lvm[293373]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:08:31 compute-0 lvm[293373]: VG ceph_vg0 finished
Nov 24 10:08:32 compute-0 distracted_shtern[293298]: {}
Nov 24 10:08:32 compute-0 systemd[1]: libpod-b1d749e443a1672aa2387466b75c1b1a70c58b8292f35907ac13ffd2b4b1227d.scope: Deactivated successfully.
Nov 24 10:08:32 compute-0 systemd[1]: libpod-b1d749e443a1672aa2387466b75c1b1a70c58b8292f35907ac13ffd2b4b1227d.scope: Consumed 1.130s CPU time.
Nov 24 10:08:32 compute-0 podman[293281]: 2025-11-24 10:08:32.048667841 +0000 UTC m=+0.890035671 container died b1d749e443a1672aa2387466b75c1b1a70c58b8292f35907ac13ffd2b4b1227d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shtern, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 10:08:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e99cba1667c9a0f1d50d7655e1b217557bcfdfe1997a8588f3770268e7ee5cc5-merged.mount: Deactivated successfully.
Nov 24 10:08:32 compute-0 podman[293281]: 2025-11-24 10:08:32.089647553 +0000 UTC m=+0.931015393 container remove b1d749e443a1672aa2387466b75c1b1a70c58b8292f35907ac13ffd2b4b1227d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shtern, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:08:32 compute-0 systemd[1]: libpod-conmon-b1d749e443a1672aa2387466b75c1b1a70c58b8292f35907ac13ffd2b4b1227d.scope: Deactivated successfully.
Nov 24 10:08:32 compute-0 sudo[293180]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:08:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:08:32 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:32 compute-0 sudo[293388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:08:32 compute-0 sudo[293388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:32 compute-0 sudo[293388]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:32 compute-0 ceph-mon[74331]: pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:32 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:32 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:08:32 compute-0 nova_compute[257700]: 2025-11-24 10:08:32.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:08:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:33.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:33.580Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:33.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:34 compute-0 ceph-mon[74331]: pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:08:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:08:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:08:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:08:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:35.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:35.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:36 compute-0 nova_compute[257700]: 2025-11-24 10:08:36.361 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:36 compute-0 nova_compute[257700]: 2025-11-24 10:08:36.550 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:36 compute-0 ceph-mon[74331]: pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:37.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:37.603Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:37 compute-0 ceph-mon[74331]: pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:08:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:08:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:37.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:08:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:38.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:08:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:39.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:08:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:08:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:39.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:08:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:08:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:08:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:08:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:08:40 compute-0 ceph-mon[74331]: pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:08:40] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:08:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:08:40] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:08:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:41.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:41 compute-0 nova_compute[257700]: 2025-11-24 10:08:41.364 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:41 compute-0 nova_compute[257700]: 2025-11-24 10:08:41.551 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:41.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:42 compute-0 ceph-mon[74331]: pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:43.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:43.581Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:43.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:44 compute-0 ceph-mon[74331]: pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:08:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:08:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:08:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:08:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:45.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:08:45
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.nfs', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes']
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:08:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:08:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:45.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:08:45 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:08:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:08:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:08:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:08:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:08:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:08:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:08:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:08:46 compute-0 nova_compute[257700]: 2025-11-24 10:08:46.366 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:46 compute-0 ceph-mon[74331]: pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:46 compute-0 nova_compute[257700]: 2025-11-24 10:08:46.590 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:47.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:47.605Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:47.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:48 compute-0 ceph-mon[74331]: pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:48.952Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:08:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:48.952Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:08:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:48.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:08:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:49.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:49.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:08:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:08:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:08:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:08:50 compute-0 sudo[293433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:08:50 compute-0 sudo[293433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:08:50 compute-0 sudo[293433]: pam_unix(sudo:session): session closed for user root
Nov 24 10:08:50 compute-0 ceph-mon[74331]: pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:08:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:08:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:08:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:08:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:51.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:51 compute-0 nova_compute[257700]: 2025-11-24 10:08:51.368 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:51 compute-0 nova_compute[257700]: 2025-11-24 10:08:51.591 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:51.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:52 compute-0 ceph-mon[74331]: pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:53.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:53.583Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:54 compute-0 ceph-mon[74331]: pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:54 compute-0 podman[293463]: 2025-11-24 10:08:54.829619007 +0000 UTC m=+0.083195017 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 10:08:54 compute-0 podman[293464]: 2025-11-24 10:08:54.85861002 +0000 UTC m=+0.112055847 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller)
Nov 24 10:08:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:08:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:08:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:08:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:08:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:55.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:55 compute-0 sshd-session[293426]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:08:55 compute-0 sshd-session[293426]: banner exchange: Connection from 14.215.126.91 port 43598: Connection timed out
Nov 24 10:08:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:55.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:56 compute-0 nova_compute[257700]: 2025-11-24 10:08:56.371 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:56 compute-0 nova_compute[257700]: 2025-11-24 10:08:56.594 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:08:56 compute-0 ceph-mon[74331]: pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:08:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:57.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:57.606Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:57 compute-0 ceph-mon[74331]: pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:08:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:57.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:08:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:08:58.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:08:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:08:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:08:59.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:08:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:08:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:08:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:08:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:08:59.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:09:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:09:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:08:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:09:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:09:00 compute-0 ceph-mon[74331]: pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:00 compute-0 podman[293514]: 2025-11-24 10:09:00.793210232 +0000 UTC m=+0.063908705 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:09:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:09:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 24 10:09:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:09:00] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 24 10:09:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:01.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:01 compute-0 nova_compute[257700]: 2025-11-24 10:09:01.376 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:09:01 compute-0 nova_compute[257700]: 2025-11-24 10:09:01.596 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:01.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:02 compute-0 ceph-mon[74331]: pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/4252246755' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:09:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/4252246755' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:09:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:03.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:03.584Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:03.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:04 compute-0 ceph-mon[74331]: pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:09:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:09:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:09:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:09:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:05.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:09:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:05.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:09:06 compute-0 nova_compute[257700]: 2025-11-24 10:09:06.377 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:06 compute-0 ceph-mon[74331]: pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:06 compute-0 nova_compute[257700]: 2025-11-24 10:09:06.597 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:06 compute-0 sshd-session[293537]: Invalid user user2 from 83.229.122.23 port 33292
Nov 24 10:09:07 compute-0 sshd-session[293537]: Received disconnect from 83.229.122.23 port 33292:11: Bye Bye [preauth]
Nov 24 10:09:07 compute-0 sshd-session[293537]: Disconnected from invalid user user2 83.229.122.23 port 33292 [preauth]
Nov 24 10:09:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:07.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:07.608Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:07.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:08 compute-0 ceph-mon[74331]: pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:08.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:09.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:09 compute-0 sshd-session[293540]: Invalid user mm from 45.78.198.78 port 38502
Nov 24 10:09:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:09.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:09:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:09:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:09:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:09:10 compute-0 sudo[293545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:09:10 compute-0 sudo[293545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:10 compute-0 sudo[293545]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:10 compute-0 sshd-session[293540]: Received disconnect from 45.78.198.78 port 38502:11: Bye Bye [preauth]
Nov 24 10:09:10 compute-0 sshd-session[293540]: Disconnected from invalid user mm 45.78.198.78 port 38502 [preauth]
Nov 24 10:09:10 compute-0 ceph-mon[74331]: pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:09:10] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 24 10:09:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:09:10] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 24 10:09:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:11.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:11 compute-0 nova_compute[257700]: 2025-11-24 10:09:11.378 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:11 compute-0 nova_compute[257700]: 2025-11-24 10:09:11.599 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:11.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:12 compute-0 ceph-mon[74331]: pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:13.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:13.584Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:09:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:13.585Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:09:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:13.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:14 compute-0 ceph-mon[74331]: pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:09:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:09:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:09:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:09:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:15.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:09:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:09:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:09:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:09:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:09:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:09:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:09:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:15.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:16 compute-0 nova_compute[257700]: 2025-11-24 10:09:16.380 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:16 compute-0 nova_compute[257700]: 2025-11-24 10:09:16.637 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:16 compute-0 ceph-mon[74331]: pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:17.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:17.609Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:17.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:18 compute-0 ceph-mon[74331]: pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:18.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:19.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:19 compute-0 ceph-mon[74331]: pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:19.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:09:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:09:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:09:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:09:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:09:20.582 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:09:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:09:20.582 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:09:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:09:20.582 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:09:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:09:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 24 10:09:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:09:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 24 10:09:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:21.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:21 compute-0 nova_compute[257700]: 2025-11-24 10:09:21.384 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:21 compute-0 nova_compute[257700]: 2025-11-24 10:09:21.639 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:21.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:22 compute-0 ceph-mon[74331]: pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:23.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:23.585Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:09:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:23.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:09:24 compute-0 ceph-mon[74331]: pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:24 compute-0 nova_compute[257700]: 2025-11-24 10:09:24.916 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:09:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:09:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:09:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:09:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:25.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:25.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:25 compute-0 podman[293587]: 2025-11-24 10:09:25.794795124 +0000 UTC m=+0.061491266 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 24 10:09:25 compute-0 podman[293588]: 2025-11-24 10:09:25.904303776 +0000 UTC m=+0.165766987 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:09:25 compute-0 nova_compute[257700]: 2025-11-24 10:09:25.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:26 compute-0 nova_compute[257700]: 2025-11-24 10:09:26.387 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:26 compute-0 ceph-mon[74331]: pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:26 compute-0 nova_compute[257700]: 2025-11-24 10:09:26.641 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:26 compute-0 nova_compute[257700]: 2025-11-24 10:09:26.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:27.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:27.609Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:27 compute-0 nova_compute[257700]: 2025-11-24 10:09:27.915 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:27 compute-0 nova_compute[257700]: 2025-11-24 10:09:27.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:27 compute-0 nova_compute[257700]: 2025-11-24 10:09:27.920 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:09:27 compute-0 nova_compute[257700]: 2025-11-24 10:09:27.920 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:09:27 compute-0 nova_compute[257700]: 2025-11-24 10:09:27.933 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:09:27 compute-0 nova_compute[257700]: 2025-11-24 10:09:27.933 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:27 compute-0 nova_compute[257700]: 2025-11-24 10:09:27.934 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:27 compute-0 nova_compute[257700]: 2025-11-24 10:09:27.950 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:09:27 compute-0 nova_compute[257700]: 2025-11-24 10:09:27.951 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:09:27 compute-0 nova_compute[257700]: 2025-11-24 10:09:27.951 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:09:27 compute-0 nova_compute[257700]: 2025-11-24 10:09:27.951 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:09:27 compute-0 nova_compute[257700]: 2025-11-24 10:09:27.951 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:09:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:09:28 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3011647413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.373 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:09:28 compute-0 ceph-mon[74331]: pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3011647413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.505 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.506 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4499MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.507 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.507 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.563 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.563 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.590 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing inventories for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.615 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating ProviderTree inventory for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.615 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating inventory in ProviderTree for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.627 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing aggregate associations for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.644 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing trait associations for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,HW_CPU_X86_BMI,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 10:09:28 compute-0 nova_compute[257700]: 2025-11-24 10:09:28.661 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:09:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:28.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:09:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:28.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:09:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:28.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:09:29 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801198514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:29 compute-0 nova_compute[257700]: 2025-11-24 10:09:29.078 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:09:29 compute-0 nova_compute[257700]: 2025-11-24 10:09:29.083 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:09:29 compute-0 nova_compute[257700]: 2025-11-24 10:09:29.096 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:09:29 compute-0 nova_compute[257700]: 2025-11-24 10:09:29.098 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:09:29 compute-0 nova_compute[257700]: 2025-11-24 10:09:29.099 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:09:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:29.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/801198514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2932132905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:29.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:09:29 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3812342946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:09:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:09:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:09:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:09:30 compute-0 nova_compute[257700]: 2025-11-24 10:09:30.087 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:30 compute-0 nova_compute[257700]: 2025-11-24 10:09:30.087 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:09:30 compute-0 sudo[293679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:09:30 compute-0 sudo[293679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:30 compute-0 sudo[293679]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:30 compute-0 ceph-mon[74331]: pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3812342946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:09:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:09:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:09:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:09:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:09:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:09:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:31.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:09:31 compute-0 nova_compute[257700]: 2025-11-24 10:09:31.387 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:31 compute-0 nova_compute[257700]: 2025-11-24 10:09:31.683 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:31.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:31 compute-0 podman[293706]: 2025-11-24 10:09:31.796905742 +0000 UTC m=+0.068341267 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 24 10:09:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:32 compute-0 sudo[293726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:09:32 compute-0 sudo[293726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:32 compute-0 sudo[293726]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:32 compute-0 ceph-mon[74331]: pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:32 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/416098763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:32 compute-0 sudo[293751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:09:32 compute-0 sudo[293751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:32 compute-0 nova_compute[257700]: 2025-11-24 10:09:32.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:33 compute-0 sudo[293751]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:09:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:09:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:09:33 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:09:33 compute-0 sudo[293808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:09:33 compute-0 sudo[293808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:33 compute-0 sudo[293808]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:33.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:33 compute-0 sudo[293833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:09:33 compute-0 sudo[293833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:33 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:09:33 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:09:33 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:09:33 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:09:33 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:09:33 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:09:33 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:09:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2882460740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:09:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:33.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:09:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:33.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:33.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:33 compute-0 podman[293900]: 2025-11-24 10:09:33.818481039 +0000 UTC m=+0.066095351 container create 343986e334dcfd444c144fa32155876b9bfd68fd5551a65a3dd462aea1422c84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 24 10:09:33 compute-0 systemd[1]: Started libpod-conmon-343986e334dcfd444c144fa32155876b9bfd68fd5551a65a3dd462aea1422c84.scope.
Nov 24 10:09:33 compute-0 sshd-session[293583]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:09:33 compute-0 sshd-session[293583]: banner exchange: Connection from 14.215.126.91 port 42412: Connection timed out
Nov 24 10:09:33 compute-0 podman[293900]: 2025-11-24 10:09:33.794319895 +0000 UTC m=+0.041934287 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:09:33 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:09:33 compute-0 nova_compute[257700]: 2025-11-24 10:09:33.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:09:33 compute-0 podman[293900]: 2025-11-24 10:09:33.928280568 +0000 UTC m=+0.175894920 container init 343986e334dcfd444c144fa32155876b9bfd68fd5551a65a3dd462aea1422c84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tu, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Nov 24 10:09:33 compute-0 podman[293900]: 2025-11-24 10:09:33.940166665 +0000 UTC m=+0.187780987 container start 343986e334dcfd444c144fa32155876b9bfd68fd5551a65a3dd462aea1422c84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tu, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:09:33 compute-0 podman[293900]: 2025-11-24 10:09:33.944446502 +0000 UTC m=+0.192060814 container attach 343986e334dcfd444c144fa32155876b9bfd68fd5551a65a3dd462aea1422c84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tu, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Nov 24 10:09:33 compute-0 zealous_tu[293916]: 167 167
Nov 24 10:09:33 compute-0 systemd[1]: libpod-343986e334dcfd444c144fa32155876b9bfd68fd5551a65a3dd462aea1422c84.scope: Deactivated successfully.
Nov 24 10:09:33 compute-0 podman[293900]: 2025-11-24 10:09:33.94676214 +0000 UTC m=+0.194376432 container died 343986e334dcfd444c144fa32155876b9bfd68fd5551a65a3dd462aea1422c84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:09:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-11636a2cb8b778db64ce56b0e4d3bda973c8382beebcdb3bab2253ede403f0bf-merged.mount: Deactivated successfully.
Nov 24 10:09:33 compute-0 podman[293900]: 2025-11-24 10:09:33.98562531 +0000 UTC m=+0.233239612 container remove 343986e334dcfd444c144fa32155876b9bfd68fd5551a65a3dd462aea1422c84 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tu, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 10:09:34 compute-0 systemd[1]: libpod-conmon-343986e334dcfd444c144fa32155876b9bfd68fd5551a65a3dd462aea1422c84.scope: Deactivated successfully.
Nov 24 10:09:34 compute-0 podman[293939]: 2025-11-24 10:09:34.140408751 +0000 UTC m=+0.038666475 container create 4c4d4a2e2a0f07ba902aae740aba602e0a278066c1f6118c26ee252c1f30a89f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 10:09:34 compute-0 systemd[1]: Started libpod-conmon-4c4d4a2e2a0f07ba902aae740aba602e0a278066c1f6118c26ee252c1f30a89f.scope.
Nov 24 10:09:34 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcab1f3c2a3a08969d990cab00e2ef556c8ceaf1cb3c21491a5d675d48ff3008/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcab1f3c2a3a08969d990cab00e2ef556c8ceaf1cb3c21491a5d675d48ff3008/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcab1f3c2a3a08969d990cab00e2ef556c8ceaf1cb3c21491a5d675d48ff3008/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcab1f3c2a3a08969d990cab00e2ef556c8ceaf1cb3c21491a5d675d48ff3008/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcab1f3c2a3a08969d990cab00e2ef556c8ceaf1cb3c21491a5d675d48ff3008/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:34 compute-0 podman[293939]: 2025-11-24 10:09:34.221745872 +0000 UTC m=+0.120003596 container init 4c4d4a2e2a0f07ba902aae740aba602e0a278066c1f6118c26ee252c1f30a89f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_swanson, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:09:34 compute-0 podman[293939]: 2025-11-24 10:09:34.124084045 +0000 UTC m=+0.022341789 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:09:34 compute-0 podman[293939]: 2025-11-24 10:09:34.228138721 +0000 UTC m=+0.126396445 container start 4c4d4a2e2a0f07ba902aae740aba602e0a278066c1f6118c26ee252c1f30a89f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_swanson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 10:09:34 compute-0 podman[293939]: 2025-11-24 10:09:34.230982802 +0000 UTC m=+0.129240526 container attach 4c4d4a2e2a0f07ba902aae740aba602e0a278066c1f6118c26ee252c1f30a89f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 24 10:09:34 compute-0 charming_swanson[293955]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:09:34 compute-0 charming_swanson[293955]: --> All data devices are unavailable
Nov 24 10:09:34 compute-0 ceph-mon[74331]: pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:34 compute-0 systemd[1]: libpod-4c4d4a2e2a0f07ba902aae740aba602e0a278066c1f6118c26ee252c1f30a89f.scope: Deactivated successfully.
Nov 24 10:09:34 compute-0 podman[293939]: 2025-11-24 10:09:34.574145435 +0000 UTC m=+0.472403159 container died 4c4d4a2e2a0f07ba902aae740aba602e0a278066c1f6118c26ee252c1f30a89f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_swanson, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:09:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcab1f3c2a3a08969d990cab00e2ef556c8ceaf1cb3c21491a5d675d48ff3008-merged.mount: Deactivated successfully.
Nov 24 10:09:34 compute-0 podman[293939]: 2025-11-24 10:09:34.625891877 +0000 UTC m=+0.524149641 container remove 4c4d4a2e2a0f07ba902aae740aba602e0a278066c1f6118c26ee252c1f30a89f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_swanson, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:09:34 compute-0 systemd[1]: libpod-conmon-4c4d4a2e2a0f07ba902aae740aba602e0a278066c1f6118c26ee252c1f30a89f.scope: Deactivated successfully.
Nov 24 10:09:34 compute-0 sudo[293833]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:34 compute-0 sudo[293983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:09:34 compute-0 sudo[293983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:34 compute-0 sudo[293983]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:34 compute-0 sudo[294008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:09:34 compute-0 sudo[294008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:09:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:09:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:09:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:09:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:09:35 compute-0 podman[294074]: 2025-11-24 10:09:35.247448567 +0000 UTC m=+0.043120307 container create e0ed5105139776ead0ec0088a58236bd38032bc91b14ff3f3d29c3a5bbbfd583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 10:09:35 compute-0 systemd[1]: Started libpod-conmon-e0ed5105139776ead0ec0088a58236bd38032bc91b14ff3f3d29c3a5bbbfd583.scope.
Nov 24 10:09:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:09:35 compute-0 podman[294074]: 2025-11-24 10:09:35.226791721 +0000 UTC m=+0.022463511 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:09:35 compute-0 podman[294074]: 2025-11-24 10:09:35.328229963 +0000 UTC m=+0.123901733 container init e0ed5105139776ead0ec0088a58236bd38032bc91b14ff3f3d29c3a5bbbfd583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nightingale, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 10:09:35 compute-0 podman[294074]: 2025-11-24 10:09:35.340803117 +0000 UTC m=+0.136474857 container start e0ed5105139776ead0ec0088a58236bd38032bc91b14ff3f3d29c3a5bbbfd583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:09:35 compute-0 podman[294074]: 2025-11-24 10:09:35.343627087 +0000 UTC m=+0.139298847 container attach e0ed5105139776ead0ec0088a58236bd38032bc91b14ff3f3d29c3a5bbbfd583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:09:35 compute-0 jovial_nightingale[294090]: 167 167
Nov 24 10:09:35 compute-0 systemd[1]: libpod-e0ed5105139776ead0ec0088a58236bd38032bc91b14ff3f3d29c3a5bbbfd583.scope: Deactivated successfully.
Nov 24 10:09:35 compute-0 podman[294074]: 2025-11-24 10:09:35.346235733 +0000 UTC m=+0.141907513 container died e0ed5105139776ead0ec0088a58236bd38032bc91b14ff3f3d29c3a5bbbfd583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nightingale, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:09:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:35.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1904859937d5c65269ca2dda47c3cb4ea99349d67617272b8c71740c6288a344-merged.mount: Deactivated successfully.
Nov 24 10:09:35 compute-0 podman[294074]: 2025-11-24 10:09:35.387589944 +0000 UTC m=+0.183261694 container remove e0ed5105139776ead0ec0088a58236bd38032bc91b14ff3f3d29c3a5bbbfd583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 10:09:35 compute-0 systemd[1]: libpod-conmon-e0ed5105139776ead0ec0088a58236bd38032bc91b14ff3f3d29c3a5bbbfd583.scope: Deactivated successfully.
Nov 24 10:09:35 compute-0 podman[294114]: 2025-11-24 10:09:35.612730683 +0000 UTC m=+0.058279856 container create 84d8a0e5615d60630fb255574808140ec54ff962e8399883f6e5399eed941ffd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sutherland, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:09:35 compute-0 systemd[1]: Started libpod-conmon-84d8a0e5615d60630fb255574808140ec54ff962e8399883f6e5399eed941ffd.scope.
Nov 24 10:09:35 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:09:35 compute-0 podman[294114]: 2025-11-24 10:09:35.583837701 +0000 UTC m=+0.029386914 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f0bdcce780b57994999745bc994c75a4cf4770904c5d7cb3ff1931e7774643/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f0bdcce780b57994999745bc994c75a4cf4770904c5d7cb3ff1931e7774643/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f0bdcce780b57994999745bc994c75a4cf4770904c5d7cb3ff1931e7774643/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f0bdcce780b57994999745bc994c75a4cf4770904c5d7cb3ff1931e7774643/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:35 compute-0 podman[294114]: 2025-11-24 10:09:35.694260917 +0000 UTC m=+0.139810050 container init 84d8a0e5615d60630fb255574808140ec54ff962e8399883f6e5399eed941ffd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 24 10:09:35 compute-0 podman[294114]: 2025-11-24 10:09:35.703485537 +0000 UTC m=+0.149034660 container start 84d8a0e5615d60630fb255574808140ec54ff962e8399883f6e5399eed941ffd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:09:35 compute-0 podman[294114]: 2025-11-24 10:09:35.706709467 +0000 UTC m=+0.152258600 container attach 84d8a0e5615d60630fb255574808140ec54ff962e8399883f6e5399eed941ffd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:09:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:35.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]: {
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:     "0": [
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:         {
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             "devices": [
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "/dev/loop3"
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             ],
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             "lv_name": "ceph_lv0",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             "lv_size": "21470642176",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             "name": "ceph_lv0",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             "tags": {
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.cluster_name": "ceph",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.crush_device_class": "",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.encrypted": "0",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.osd_id": "0",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.type": "block",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.vdo": "0",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:                 "ceph.with_tpm": "0"
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             },
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             "type": "block",
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:             "vg_name": "ceph_vg0"
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:         }
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]:     ]
Nov 24 10:09:35 compute-0 vibrant_sutherland[294132]: }
Nov 24 10:09:35 compute-0 systemd[1]: libpod-84d8a0e5615d60630fb255574808140ec54ff962e8399883f6e5399eed941ffd.scope: Deactivated successfully.
Nov 24 10:09:35 compute-0 conmon[294132]: conmon 84d8a0e5615d60630fb2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84d8a0e5615d60630fb255574808140ec54ff962e8399883f6e5399eed941ffd.scope/container/memory.events
Nov 24 10:09:35 compute-0 podman[294114]: 2025-11-24 10:09:35.982432728 +0000 UTC m=+0.427981871 container died 84d8a0e5615d60630fb255574808140ec54ff962e8399883f6e5399eed941ffd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 24 10:09:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7f0bdcce780b57994999745bc994c75a4cf4770904c5d7cb3ff1931e7774643-merged.mount: Deactivated successfully.
Nov 24 10:09:36 compute-0 podman[294114]: 2025-11-24 10:09:36.037590464 +0000 UTC m=+0.483139637 container remove 84d8a0e5615d60630fb255574808140ec54ff962e8399883f6e5399eed941ffd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sutherland, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 10:09:36 compute-0 systemd[1]: libpod-conmon-84d8a0e5615d60630fb255574808140ec54ff962e8399883f6e5399eed941ffd.scope: Deactivated successfully.
Nov 24 10:09:36 compute-0 sudo[294008]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:36 compute-0 sudo[294156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:09:36 compute-0 sudo[294156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:36 compute-0 sudo[294156]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:36 compute-0 sudo[294181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:09:36 compute-0 sudo[294181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:36 compute-0 nova_compute[257700]: 2025-11-24 10:09:36.389 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:36 compute-0 ceph-mon[74331]: pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:09:36 compute-0 podman[294250]: 2025-11-24 10:09:36.636485759 +0000 UTC m=+0.054718766 container create c539f19ddcfd49ec8c8cc03d886031a3109036e4164a2c468f6ec475f375579c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 10:09:36 compute-0 systemd[1]: Started libpod-conmon-c539f19ddcfd49ec8c8cc03d886031a3109036e4164a2c468f6ec475f375579c.scope.
Nov 24 10:09:36 compute-0 nova_compute[257700]: 2025-11-24 10:09:36.685 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:36 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:09:36 compute-0 podman[294250]: 2025-11-24 10:09:36.709872961 +0000 UTC m=+0.128105988 container init c539f19ddcfd49ec8c8cc03d886031a3109036e4164a2c468f6ec475f375579c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:09:36 compute-0 podman[294250]: 2025-11-24 10:09:36.622158462 +0000 UTC m=+0.040391489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:09:36 compute-0 podman[294250]: 2025-11-24 10:09:36.718342052 +0000 UTC m=+0.136575069 container start c539f19ddcfd49ec8c8cc03d886031a3109036e4164a2c468f6ec475f375579c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 10:09:36 compute-0 podman[294250]: 2025-11-24 10:09:36.721953202 +0000 UTC m=+0.140186229 container attach c539f19ddcfd49ec8c8cc03d886031a3109036e4164a2c468f6ec475f375579c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:09:36 compute-0 practical_davinci[294266]: 167 167
Nov 24 10:09:36 compute-0 systemd[1]: libpod-c539f19ddcfd49ec8c8cc03d886031a3109036e4164a2c468f6ec475f375579c.scope: Deactivated successfully.
Nov 24 10:09:36 compute-0 podman[294250]: 2025-11-24 10:09:36.724548877 +0000 UTC m=+0.142781894 container died c539f19ddcfd49ec8c8cc03d886031a3109036e4164a2c468f6ec475f375579c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_davinci, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:09:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8361d26bb28bfacc36bbc6fdbc4e008778a1cbfdcf1a7fe22e01fa466ae0a38-merged.mount: Deactivated successfully.
Nov 24 10:09:36 compute-0 podman[294250]: 2025-11-24 10:09:36.760889694 +0000 UTC m=+0.179122711 container remove c539f19ddcfd49ec8c8cc03d886031a3109036e4164a2c468f6ec475f375579c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_davinci, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Nov 24 10:09:36 compute-0 systemd[1]: libpod-conmon-c539f19ddcfd49ec8c8cc03d886031a3109036e4164a2c468f6ec475f375579c.scope: Deactivated successfully.
Nov 24 10:09:36 compute-0 podman[294290]: 2025-11-24 10:09:36.98192634 +0000 UTC m=+0.077421333 container create f8ead4bd80ab90f6e5f64eccaac9173e963c09cb836aa932159794c8e8120bee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_hawking, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Nov 24 10:09:37 compute-0 sshd-session[294137]: Invalid user admin from 36.255.3.203 port 36428
Nov 24 10:09:37 compute-0 podman[294290]: 2025-11-24 10:09:36.936822844 +0000 UTC m=+0.032317857 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:09:37 compute-0 systemd[1]: Started libpod-conmon-f8ead4bd80ab90f6e5f64eccaac9173e963c09cb836aa932159794c8e8120bee.scope.
Nov 24 10:09:37 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afc0e89119a42570e0f396f5a34c77e32aa01760b1d64a25a7c7c484d16e26c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afc0e89119a42570e0f396f5a34c77e32aa01760b1d64a25a7c7c484d16e26c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afc0e89119a42570e0f396f5a34c77e32aa01760b1d64a25a7c7c484d16e26c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afc0e89119a42570e0f396f5a34c77e32aa01760b1d64a25a7c7c484d16e26c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:09:37 compute-0 podman[294290]: 2025-11-24 10:09:37.089238028 +0000 UTC m=+0.184733061 container init f8ead4bd80ab90f6e5f64eccaac9173e963c09cb836aa932159794c8e8120bee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_hawking, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 10:09:37 compute-0 podman[294290]: 2025-11-24 10:09:37.095645397 +0000 UTC m=+0.191140360 container start f8ead4bd80ab90f6e5f64eccaac9173e963c09cb836aa932159794c8e8120bee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:09:37 compute-0 podman[294290]: 2025-11-24 10:09:37.120452117 +0000 UTC m=+0.215947120 container attach f8ead4bd80ab90f6e5f64eccaac9173e963c09cb836aa932159794c8e8120bee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 10:09:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:37 compute-0 sshd-session[294137]: Received disconnect from 36.255.3.203 port 36428:11: Bye Bye [preauth]
Nov 24 10:09:37 compute-0 sshd-session[294137]: Disconnected from invalid user admin 36.255.3.203 port 36428 [preauth]
Nov 24 10:09:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:37.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:37.610Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:37 compute-0 lvm[294381]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:09:37 compute-0 lvm[294381]: VG ceph_vg0 finished
Nov 24 10:09:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:37.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:37 compute-0 elated_hawking[294306]: {}
Nov 24 10:09:37 compute-0 systemd[1]: libpod-f8ead4bd80ab90f6e5f64eccaac9173e963c09cb836aa932159794c8e8120bee.scope: Deactivated successfully.
Nov 24 10:09:37 compute-0 systemd[1]: libpod-f8ead4bd80ab90f6e5f64eccaac9173e963c09cb836aa932159794c8e8120bee.scope: Consumed 1.184s CPU time.
Nov 24 10:09:37 compute-0 podman[294385]: 2025-11-24 10:09:37.888549873 +0000 UTC m=+0.027387374 container died f8ead4bd80ab90f6e5f64eccaac9173e963c09cb836aa932159794c8e8120bee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 10:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-afc0e89119a42570e0f396f5a34c77e32aa01760b1d64a25a7c7c484d16e26c7-merged.mount: Deactivated successfully.
Nov 24 10:09:37 compute-0 podman[294385]: 2025-11-24 10:09:37.932580122 +0000 UTC m=+0.071417603 container remove f8ead4bd80ab90f6e5f64eccaac9173e963c09cb836aa932159794c8e8120bee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 10:09:37 compute-0 systemd[1]: libpod-conmon-f8ead4bd80ab90f6e5f64eccaac9173e963c09cb836aa932159794c8e8120bee.scope: Deactivated successfully.
Nov 24 10:09:37 compute-0 sudo[294181]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:09:37 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:09:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:09:38 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:09:38 compute-0 sudo[294402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:09:38 compute-0 sudo[294402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:38 compute-0 sudo[294402]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:38 compute-0 ceph-mon[74331]: pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:38 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:09:38 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:09:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:38.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:09:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:39.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:39.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:09:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:09:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:09:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:09:40 compute-0 ceph-mon[74331]: pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:09:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:09:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:09:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:09:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:09:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:09:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:41.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:41 compute-0 nova_compute[257700]: 2025-11-24 10:09:41.392 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:41 compute-0 nova_compute[257700]: 2025-11-24 10:09:41.687 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:41.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:42 compute-0 ceph-mon[74331]: pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:09:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:43.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:43.587Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:43 compute-0 ceph-mon[74331]: pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:43.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:09:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:09:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:09:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:09:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:45.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:09:45
Nov 24 10:09:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:09:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:09:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['images', 'default.rgw.log', 'vms', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'backups', '.nfs', 'volumes']
Nov 24 10:09:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:09:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:09:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:09:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:09:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:09:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:09:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:09:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:45.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:09:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:09:46 compute-0 ceph-mon[74331]: pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:09:46 compute-0 nova_compute[257700]: 2025-11-24 10:09:46.448 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:46 compute-0 nova_compute[257700]: 2025-11-24 10:09:46.689 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:47.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:47 compute-0 ceph-mgr[74626]: [devicehealth INFO root] Check health
Nov 24 10:09:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:47.611Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:09:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:47.611Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:09:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:47.612Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:09:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:09:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:47.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:09:48 compute-0 ceph-mon[74331]: pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:48.960Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:09:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:48.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:49.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:49.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:09:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:09:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:09:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:09:50 compute-0 ceph-mon[74331]: pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:50 compute-0 sudo[294439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:09:50 compute-0 sudo[294439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:09:50 compute-0 sudo[294439]: pam_unix(sudo:session): session closed for user root
Nov 24 10:09:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:09:50] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:09:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:09:50] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:09:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:51.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:51 compute-0 nova_compute[257700]: 2025-11-24 10:09:51.451 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:51 compute-0 nova_compute[257700]: 2025-11-24 10:09:51.691 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:51.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:52 compute-0 ceph-mon[74331]: pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:09:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:53.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:09:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:53.588Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:09:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:53.589Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:53.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:54 compute-0 ceph-mon[74331]: pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:09:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:09:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:09:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:09:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:09:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:55.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:55.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:09:56 compute-0 ceph-mon[74331]: pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:56 compute-0 nova_compute[257700]: 2025-11-24 10:09:56.455 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:56 compute-0 nova_compute[257700]: 2025-11-24 10:09:56.692 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:09:56 compute-0 podman[294471]: 2025-11-24 10:09:56.782437783 +0000 UTC m=+0.056841133 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 10:09:56 compute-0 podman[294472]: 2025-11-24 10:09:56.837134061 +0000 UTC m=+0.106277207 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 24 10:09:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:09:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:57.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:57.612Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:09:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:57.612Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:09:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:09:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:57.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:09:58 compute-0 ceph-mon[74331]: pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:09:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:58.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:09:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:58.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:09:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:09:58.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:09:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:09:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:09:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:09:59.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:09:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:09:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:09:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:09:59.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:10:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:10:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:10:00 compute-0 ceph-mon[74331]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 failed cephadm daemon(s)
Nov 24 10:10:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:10:00 compute-0 ceph-mon[74331]: pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:00 compute-0 ceph-mon[74331]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Nov 24 10:10:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:10:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:10:00] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:10:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:10:00] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:10:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:01.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:01 compute-0 nova_compute[257700]: 2025-11-24 10:10:01.457 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:01 compute-0 nova_compute[257700]: 2025-11-24 10:10:01.695 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:01.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:02 compute-0 ceph-mon[74331]: pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/983930137' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:10:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/983930137' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:10:02 compute-0 podman[294524]: 2025-11-24 10:10:02.81279269 +0000 UTC m=+0.075165118 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:10:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:03.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:03.590Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:03.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:04 compute-0 ceph-mon[74331]: pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:10:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:10:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:10:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:10:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:05.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:05.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:06 compute-0 nova_compute[257700]: 2025-11-24 10:10:06.459 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:06 compute-0 sshd-session[294522]: Received disconnect from 14.215.126.91 port 36276:11: Bye Bye [preauth]
Nov 24 10:10:06 compute-0 sshd-session[294522]: Disconnected from authenticating user root 14.215.126.91 port 36276 [preauth]
Nov 24 10:10:06 compute-0 ceph-mon[74331]: pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:06 compute-0 nova_compute[257700]: 2025-11-24 10:10:06.696 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:07.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:07.613Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:07.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:08 compute-0 ceph-mon[74331]: pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:08.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:10:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:08.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:10:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:08.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:10:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=cleanup t=2025-11-24T10:10:09.182227195Z level=info msg="Completed cleanup jobs" duration=24.602374ms
Nov 24 10:10:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=grafana.update.checker t=2025-11-24T10:10:09.309450313Z level=info msg="Update check succeeded" duration=60.733961ms
Nov 24 10:10:09 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0[104526]: logger=plugins.update.checker t=2025-11-24T10:10:09.360907167Z level=info msg="Update check succeeded" duration=102.874609ms
Nov 24 10:10:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:09.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:09.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:10:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:10:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:10:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:10:10 compute-0 sudo[294553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:10:10 compute-0 sudo[294553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:10 compute-0 sudo[294553]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:10 compute-0 ceph-mon[74331]: pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:10:10] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:10:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:10:10] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:10:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:11.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:11 compute-0 nova_compute[257700]: 2025-11-24 10:10:11.460 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:11 compute-0 nova_compute[257700]: 2025-11-24 10:10:11.698 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:11.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:12 compute-0 ceph-mon[74331]: pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:13.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:13.591Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:10:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:13.591Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:13.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:14 compute-0 ceph-mon[74331]: pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:10:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:10:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:10:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:10:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:10:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:10:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:15.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:10:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:10:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:10:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:10:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:10:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:15.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:16 compute-0 nova_compute[257700]: 2025-11-24 10:10:16.461 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:16 compute-0 ceph-mon[74331]: pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:16 compute-0 nova_compute[257700]: 2025-11-24 10:10:16.700 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:17.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:17.614Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:17.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:18 compute-0 ceph-mon[74331]: pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:18.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:19.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:19.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:10:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:10:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:10:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:10:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:10:20.584 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:10:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:10:20.584 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:10:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:10:20.584 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:10:20 compute-0 ceph-mon[74331]: pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:10:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 24 10:10:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:10:20] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Nov 24 10:10:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:21.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:21 compute-0 nova_compute[257700]: 2025-11-24 10:10:21.463 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:21 compute-0 nova_compute[257700]: 2025-11-24 10:10:21.702 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:21.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:22 compute-0 sshd-session[294588]: Received disconnect from 83.229.122.23 port 34422:11: Bye Bye [preauth]
Nov 24 10:10:22 compute-0 sshd-session[294588]: Disconnected from authenticating user root 83.229.122.23 port 34422 [preauth]
Nov 24 10:10:22 compute-0 ceph-mon[74331]: pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:23.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:23.592Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:23 compute-0 ceph-mon[74331]: pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:23.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:24 compute-0 nova_compute[257700]: 2025-11-24 10:10:24.678 257704 DEBUG oslo_concurrency.processutils [None req-df03cd5c-5660-4536-b19c-eb403e13ec09 1498c4791c234bc884ea0fabb778d239 cf636babb68a4ebe9bf137d3fe0e4c0c - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:10:24 compute-0 nova_compute[257700]: 2025-11-24 10:10:24.700 257704 DEBUG oslo_concurrency.processutils [None req-df03cd5c-5660-4536-b19c-eb403e13ec09 1498c4791c234bc884ea0fabb778d239 cf636babb68a4ebe9bf137d3fe0e4c0c - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:10:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:10:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:10:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:10:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:10:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:25.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:25.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:26 compute-0 ceph-mon[74331]: pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:26 compute-0 nova_compute[257700]: 2025-11-24 10:10:26.463 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:26 compute-0 nova_compute[257700]: 2025-11-24 10:10:26.704 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:26 compute-0 nova_compute[257700]: 2025-11-24 10:10:26.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:27.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:27.615Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:10:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:27.615Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:10:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:27.616Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:10:27 compute-0 podman[294598]: 2025-11-24 10:10:27.782999689 +0000 UTC m=+0.057165841 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 24 10:10:27 compute-0 podman[294599]: 2025-11-24 10:10:27.800888423 +0000 UTC m=+0.075467565 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 10:10:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:27.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:27 compute-0 nova_compute[257700]: 2025-11-24 10:10:27.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:27 compute-0 nova_compute[257700]: 2025-11-24 10:10:27.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:10:27 compute-0 nova_compute[257700]: 2025-11-24 10:10:27.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:10:27 compute-0 nova_compute[257700]: 2025-11-24 10:10:27.946 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:10:27 compute-0 nova_compute[257700]: 2025-11-24 10:10:27.946 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:27 compute-0 nova_compute[257700]: 2025-11-24 10:10:27.964 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:10:27 compute-0 nova_compute[257700]: 2025-11-24 10:10:27.965 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:10:27 compute-0 nova_compute[257700]: 2025-11-24 10:10:27.965 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:10:27 compute-0 nova_compute[257700]: 2025-11-24 10:10:27.965 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:10:27 compute-0 nova_compute[257700]: 2025-11-24 10:10:27.966 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:10:28 compute-0 ceph-mon[74331]: pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:10:28 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190216163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:28 compute-0 nova_compute[257700]: 2025-11-24 10:10:28.399 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:10:28 compute-0 nova_compute[257700]: 2025-11-24 10:10:28.578 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:10:28 compute-0 nova_compute[257700]: 2025-11-24 10:10:28.579 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4506MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:10:28 compute-0 nova_compute[257700]: 2025-11-24 10:10:28.580 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:10:28 compute-0 nova_compute[257700]: 2025-11-24 10:10:28.580 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:10:28 compute-0 nova_compute[257700]: 2025-11-24 10:10:28.649 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:10:28 compute-0 nova_compute[257700]: 2025-11-24 10:10:28.649 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:10:28 compute-0 nova_compute[257700]: 2025-11-24 10:10:28.666 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:10:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:28.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:10:29 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1476322504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:29 compute-0 nova_compute[257700]: 2025-11-24 10:10:29.089 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:10:29 compute-0 nova_compute[257700]: 2025-11-24 10:10:29.095 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:10:29 compute-0 nova_compute[257700]: 2025-11-24 10:10:29.111 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:10:29 compute-0 nova_compute[257700]: 2025-11-24 10:10:29.113 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:10:29 compute-0 nova_compute[257700]: 2025-11-24 10:10:29.113 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:10:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3190216163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1476322504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:29.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:29 compute-0 nova_compute[257700]: 2025-11-24 10:10:29.700 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:29 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:10:29.701 165073 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:13:51', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:f0:a8:6f:5e:1b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 10:10:29 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:10:29.702 165073 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 10:10:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:29.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:10:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:10:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:10:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:10:30 compute-0 nova_compute[257700]: 2025-11-24 10:10:30.089 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:30 compute-0 nova_compute[257700]: 2025-11-24 10:10:30.090 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:30 compute-0 nova_compute[257700]: 2025-11-24 10:10:30.090 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:30 compute-0 nova_compute[257700]: 2025-11-24 10:10:30.090 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:30 compute-0 nova_compute[257700]: 2025-11-24 10:10:30.090 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:10:30 compute-0 ceph-mon[74331]: pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:30 compute-0 sudo[294691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:10:30 compute-0 sudo[294691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:30 compute-0 sudo[294691]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:10:30] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:10:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:10:30] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:10:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:10:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1545364565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:31.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:31 compute-0 nova_compute[257700]: 2025-11-24 10:10:31.465 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:31 compute-0 nova_compute[257700]: 2025-11-24 10:10:31.706 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:31.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:32 compute-0 ceph-mon[74331]: pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:32 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2150094402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:32 compute-0 nova_compute[257700]: 2025-11-24 10:10:32.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:33.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:33.593Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:33 compute-0 podman[294719]: 2025-11-24 10:10:33.829057704 +0000 UTC m=+0.094044397 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 10:10:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:33.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:33 compute-0 nova_compute[257700]: 2025-11-24 10:10:33.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:10:34 compute-0 ceph-mon[74331]: pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:34 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4143398664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:10:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:10:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:10:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:10:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:35 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2672638891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:10:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:35.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:35.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:36 compute-0 ceph-mon[74331]: pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.427746) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036427801, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1377, "num_deletes": 251, "total_data_size": 2471453, "memory_usage": 2506640, "flush_reason": "Manual Compaction"}
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036446363, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 2415944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35653, "largest_seqno": 37029, "table_properties": {"data_size": 2409563, "index_size": 3580, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13829, "raw_average_key_size": 20, "raw_value_size": 2396602, "raw_average_value_size": 3488, "num_data_blocks": 156, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763978907, "oldest_key_time": 1763978907, "file_creation_time": 1763979036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 18677 microseconds, and 10698 cpu microseconds.
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.446420) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 2415944 bytes OK
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.446446) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.448186) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.448206) EVENT_LOG_v1 {"time_micros": 1763979036448200, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.448229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2465499, prev total WAL file size 2465499, number of live WAL files 2.
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.449618) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(2359KB)], [77(11MB)]
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036449748, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 14671654, "oldest_snapshot_seqno": -1}
Nov 24 10:10:36 compute-0 nova_compute[257700]: 2025-11-24 10:10:36.489 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6735 keys, 12552313 bytes, temperature: kUnknown
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036531720, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12552313, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12510182, "index_size": 24163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 177013, "raw_average_key_size": 26, "raw_value_size": 12391815, "raw_average_value_size": 1839, "num_data_blocks": 946, "num_entries": 6735, "num_filter_entries": 6735, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763979036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.532144) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12552313 bytes
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.533653) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.8 rd, 153.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 11.7 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(11.3) write-amplify(5.2) OK, records in: 7251, records dropped: 516 output_compression: NoCompression
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.533677) EVENT_LOG_v1 {"time_micros": 1763979036533664, "job": 44, "event": "compaction_finished", "compaction_time_micros": 82046, "compaction_time_cpu_micros": 34924, "output_level": 6, "num_output_files": 1, "total_output_size": 12552313, "num_input_records": 7251, "num_output_records": 6735, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036534438, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979036538521, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.449411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.538624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.538634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.538637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.538639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:10:36 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:10:36.538642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:10:36 compute-0 nova_compute[257700]: 2025-11-24 10:10:36.708 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:37.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:37.616Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:10:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:37.617Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:37 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:10:37.705 165073 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feb242b9-6422-4c37-bc7a-5c14a79beaf8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 10:10:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:37.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:38 compute-0 sudo[294742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:10:38 compute-0 sudo[294742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:38 compute-0 sudo[294742]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:38 compute-0 ceph-mon[74331]: pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:38 compute-0 sudo[294768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:10:38 compute-0 sudo[294768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:38.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:39 compute-0 sudo[294768]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 24 10:10:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 10:10:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:10:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:10:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:10:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:10:39 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:10:39 compute-0 sudo[294827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:10:39 compute-0 sudo[294827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:39 compute-0 sudo[294827]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:39 compute-0 sudo[294852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:10:39 compute-0 sudo[294852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:39 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 10:10:39 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 10:10:39 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:10:39 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:10:39 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:10:39 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:10:39 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:10:39 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:10:39 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:10:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:39.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:39 compute-0 podman[294919]: 2025-11-24 10:10:39.812730987 +0000 UTC m=+0.061847341 container create c94bb5669cf11464613dfbe23f5ce2fba7d6965500c79118253b774e68ff11fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:10:39 compute-0 systemd[1]: Started libpod-conmon-c94bb5669cf11464613dfbe23f5ce2fba7d6965500c79118253b774e68ff11fb.scope.
Nov 24 10:10:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:39.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:39 compute-0 podman[294919]: 2025-11-24 10:10:39.787473325 +0000 UTC m=+0.036589659 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:10:39 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:10:39 compute-0 podman[294919]: 2025-11-24 10:10:39.919563536 +0000 UTC m=+0.168679900 container init c94bb5669cf11464613dfbe23f5ce2fba7d6965500c79118253b774e68ff11fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Nov 24 10:10:39 compute-0 podman[294919]: 2025-11-24 10:10:39.929875297 +0000 UTC m=+0.178991671 container start c94bb5669cf11464613dfbe23f5ce2fba7d6965500c79118253b774e68ff11fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:10:39 compute-0 podman[294919]: 2025-11-24 10:10:39.934593798 +0000 UTC m=+0.183710172 container attach c94bb5669cf11464613dfbe23f5ce2fba7d6965500c79118253b774e68ff11fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_wiles, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:10:39 compute-0 recursing_wiles[294935]: 167 167
Nov 24 10:10:39 compute-0 systemd[1]: libpod-c94bb5669cf11464613dfbe23f5ce2fba7d6965500c79118253b774e68ff11fb.scope: Deactivated successfully.
Nov 24 10:10:39 compute-0 podman[294919]: 2025-11-24 10:10:39.941475571 +0000 UTC m=+0.190591935 container died c94bb5669cf11464613dfbe23f5ce2fba7d6965500c79118253b774e68ff11fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_wiles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 24 10:10:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2cf8c85ecd29c1cc22056d77ae9c3cdd0cb8bc0bc00ed0d5afad9c5dcadaa03-merged.mount: Deactivated successfully.
Nov 24 10:10:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:10:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:10:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:10:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:10:40 compute-0 podman[294919]: 2025-11-24 10:10:40.010440031 +0000 UTC m=+0.259556355 container remove c94bb5669cf11464613dfbe23f5ce2fba7d6965500c79118253b774e68ff11fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:10:40 compute-0 systemd[1]: libpod-conmon-c94bb5669cf11464613dfbe23f5ce2fba7d6965500c79118253b774e68ff11fb.scope: Deactivated successfully.
Nov 24 10:10:40 compute-0 sshd-session[294813]: Invalid user system from 36.255.3.203 port 48586
Nov 24 10:10:40 compute-0 podman[294957]: 2025-11-24 10:10:40.195093865 +0000 UTC m=+0.050739868 container create 613772b67a2f1e06d31d84c729ae804967042d5c2dd291231464bc955c18bee6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_tharp, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Nov 24 10:10:40 compute-0 systemd[1]: Started libpod-conmon-613772b67a2f1e06d31d84c729ae804967042d5c2dd291231464bc955c18bee6.scope.
Nov 24 10:10:40 compute-0 podman[294957]: 2025-11-24 10:10:40.172762739 +0000 UTC m=+0.028408762 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:10:40 compute-0 sshd-session[294813]: Received disconnect from 36.255.3.203 port 48586:11: Bye Bye [preauth]
Nov 24 10:10:40 compute-0 sshd-session[294813]: Disconnected from invalid user system 36.255.3.203 port 48586 [preauth]
Nov 24 10:10:40 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1579816047cf507e4d61a05a61fe5e147a2a57fbadfb90edea605c4f3f65ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1579816047cf507e4d61a05a61fe5e147a2a57fbadfb90edea605c4f3f65ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1579816047cf507e4d61a05a61fe5e147a2a57fbadfb90edea605c4f3f65ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1579816047cf507e4d61a05a61fe5e147a2a57fbadfb90edea605c4f3f65ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1579816047cf507e4d61a05a61fe5e147a2a57fbadfb90edea605c4f3f65ef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:40 compute-0 podman[294957]: 2025-11-24 10:10:40.301459263 +0000 UTC m=+0.157105286 container init 613772b67a2f1e06d31d84c729ae804967042d5c2dd291231464bc955c18bee6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:10:40 compute-0 podman[294957]: 2025-11-24 10:10:40.313201701 +0000 UTC m=+0.168847704 container start 613772b67a2f1e06d31d84c729ae804967042d5c2dd291231464bc955c18bee6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_tharp, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 24 10:10:40 compute-0 podman[294957]: 2025-11-24 10:10:40.316384381 +0000 UTC m=+0.172030384 container attach 613772b67a2f1e06d31d84c729ae804967042d5c2dd291231464bc955c18bee6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 10:10:40 compute-0 ceph-mon[74331]: pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:10:40 compute-0 sharp_tharp[294973]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:10:40 compute-0 sharp_tharp[294973]: --> All data devices are unavailable
Nov 24 10:10:40 compute-0 systemd[1]: libpod-613772b67a2f1e06d31d84c729ae804967042d5c2dd291231464bc955c18bee6.scope: Deactivated successfully.
Nov 24 10:10:40 compute-0 podman[294957]: 2025-11-24 10:10:40.676450665 +0000 UTC m=+0.532096668 container died 613772b67a2f1e06d31d84c729ae804967042d5c2dd291231464bc955c18bee6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 10:10:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d1579816047cf507e4d61a05a61fe5e147a2a57fbadfb90edea605c4f3f65ef-merged.mount: Deactivated successfully.
Nov 24 10:10:40 compute-0 podman[294957]: 2025-11-24 10:10:40.727995992 +0000 UTC m=+0.583641995 container remove 613772b67a2f1e06d31d84c729ae804967042d5c2dd291231464bc955c18bee6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 10:10:40 compute-0 systemd[1]: libpod-conmon-613772b67a2f1e06d31d84c729ae804967042d5c2dd291231464bc955c18bee6.scope: Deactivated successfully.
Nov 24 10:10:40 compute-0 sudo[294852]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:40 compute-0 sudo[295000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:10:40 compute-0 sudo[295000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:40 compute-0 sudo[295000]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:40 compute-0 sudo[295025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:10:40 compute-0 sudo[295025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:10:40] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:10:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:10:40] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:10:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:10:41 compute-0 podman[295092]: 2025-11-24 10:10:41.373335742 +0000 UTC m=+0.044719705 container create bc3fb971ce79e817a0d1a19eeecd9ed6fa7e819e336b35dd54a92e01a772f001 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hoover, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 10:10:41 compute-0 systemd[1]: Started libpod-conmon-bc3fb971ce79e817a0d1a19eeecd9ed6fa7e819e336b35dd54a92e01a772f001.scope.
Nov 24 10:10:41 compute-0 podman[295092]: 2025-11-24 10:10:41.355200742 +0000 UTC m=+0.026584725 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:10:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:10:41 compute-0 podman[295092]: 2025-11-24 10:10:41.476855748 +0000 UTC m=+0.148239711 container init bc3fb971ce79e817a0d1a19eeecd9ed6fa7e819e336b35dd54a92e01a772f001 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hoover, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 24 10:10:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:41.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:41 compute-0 podman[295092]: 2025-11-24 10:10:41.489730294 +0000 UTC m=+0.161114257 container start bc3fb971ce79e817a0d1a19eeecd9ed6fa7e819e336b35dd54a92e01a772f001 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:10:41 compute-0 nova_compute[257700]: 2025-11-24 10:10:41.491 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:41 compute-0 podman[295092]: 2025-11-24 10:10:41.493623303 +0000 UTC m=+0.165007336 container attach bc3fb971ce79e817a0d1a19eeecd9ed6fa7e819e336b35dd54a92e01a772f001 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hoover, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:10:41 compute-0 clever_hoover[295108]: 167 167
Nov 24 10:10:41 compute-0 systemd[1]: libpod-bc3fb971ce79e817a0d1a19eeecd9ed6fa7e819e336b35dd54a92e01a772f001.scope: Deactivated successfully.
Nov 24 10:10:41 compute-0 podman[295092]: 2025-11-24 10:10:41.499335458 +0000 UTC m=+0.170719421 container died bc3fb971ce79e817a0d1a19eeecd9ed6fa7e819e336b35dd54a92e01a772f001 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:10:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-776050f276631a7c1c6a96be619a3806528c30b64ca1593c212bd636e916b52c-merged.mount: Deactivated successfully.
Nov 24 10:10:41 compute-0 podman[295092]: 2025-11-24 10:10:41.53292769 +0000 UTC m=+0.204311653 container remove bc3fb971ce79e817a0d1a19eeecd9ed6fa7e819e336b35dd54a92e01a772f001 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_hoover, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 24 10:10:41 compute-0 systemd[1]: libpod-conmon-bc3fb971ce79e817a0d1a19eeecd9ed6fa7e819e336b35dd54a92e01a772f001.scope: Deactivated successfully.
Nov 24 10:10:41 compute-0 nova_compute[257700]: 2025-11-24 10:10:41.710 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:41 compute-0 podman[295134]: 2025-11-24 10:10:41.790608107 +0000 UTC m=+0.079049606 container create 6d9a50c889c5edbe106f20c869906d78db47f9c256bae6c9b7598e220aecf0f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_ritchie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 10:10:41 compute-0 systemd[1]: Started libpod-conmon-6d9a50c889c5edbe106f20c869906d78db47f9c256bae6c9b7598e220aecf0f6.scope.
Nov 24 10:10:41 compute-0 podman[295134]: 2025-11-24 10:10:41.759630311 +0000 UTC m=+0.048071850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:10:41 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:10:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2158f42b89a8d34dbad076f8eb31b9631f34d9f5caeefafdebb03b61fbfa0aee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:41.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2158f42b89a8d34dbad076f8eb31b9631f34d9f5caeefafdebb03b61fbfa0aee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2158f42b89a8d34dbad076f8eb31b9631f34d9f5caeefafdebb03b61fbfa0aee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2158f42b89a8d34dbad076f8eb31b9631f34d9f5caeefafdebb03b61fbfa0aee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:41 compute-0 podman[295134]: 2025-11-24 10:10:41.884724214 +0000 UTC m=+0.173165763 container init 6d9a50c889c5edbe106f20c869906d78db47f9c256bae6c9b7598e220aecf0f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_ritchie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 10:10:41 compute-0 podman[295134]: 2025-11-24 10:10:41.894679186 +0000 UTC m=+0.183120635 container start 6d9a50c889c5edbe106f20c869906d78db47f9c256bae6c9b7598e220aecf0f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_ritchie, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 24 10:10:41 compute-0 podman[295134]: 2025-11-24 10:10:41.898205646 +0000 UTC m=+0.186647095 container attach 6d9a50c889c5edbe106f20c869906d78db47f9c256bae6c9b7598e220aecf0f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]: {
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:     "0": [
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:         {
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             "devices": [
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "/dev/loop3"
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             ],
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             "lv_name": "ceph_lv0",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             "lv_size": "21470642176",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             "name": "ceph_lv0",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             "tags": {
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.cluster_name": "ceph",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.crush_device_class": "",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.encrypted": "0",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.osd_id": "0",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.type": "block",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.vdo": "0",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:                 "ceph.with_tpm": "0"
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             },
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             "type": "block",
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:             "vg_name": "ceph_vg0"
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:         }
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]:     ]
Nov 24 10:10:42 compute-0 gallant_ritchie[295150]: }
Nov 24 10:10:42 compute-0 systemd[1]: libpod-6d9a50c889c5edbe106f20c869906d78db47f9c256bae6c9b7598e220aecf0f6.scope: Deactivated successfully.
Nov 24 10:10:42 compute-0 podman[295134]: 2025-11-24 10:10:42.192230354 +0000 UTC m=+0.480671833 container died 6d9a50c889c5edbe106f20c869906d78db47f9c256bae6c9b7598e220aecf0f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:10:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2158f42b89a8d34dbad076f8eb31b9631f34d9f5caeefafdebb03b61fbfa0aee-merged.mount: Deactivated successfully.
Nov 24 10:10:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:42 compute-0 podman[295134]: 2025-11-24 10:10:42.254077533 +0000 UTC m=+0.542518992 container remove 6d9a50c889c5edbe106f20c869906d78db47f9c256bae6c9b7598e220aecf0f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 24 10:10:42 compute-0 systemd[1]: libpod-conmon-6d9a50c889c5edbe106f20c869906d78db47f9c256bae6c9b7598e220aecf0f6.scope: Deactivated successfully.
Nov 24 10:10:42 compute-0 sudo[295025]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:42 compute-0 sudo[295172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:10:42 compute-0 sudo[295172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:42 compute-0 sudo[295172]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:42 compute-0 sudo[295198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:10:42 compute-0 sudo[295198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:42 compute-0 ceph-mon[74331]: pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:10:42 compute-0 podman[295263]: 2025-11-24 10:10:42.890852555 +0000 UTC m=+0.043078154 container create 4e8d6d3a7db642eeda714add2667040576bdb03fe69ef4949b10822f22231301 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackburn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 10:10:42 compute-0 systemd[1]: Started libpod-conmon-4e8d6d3a7db642eeda714add2667040576bdb03fe69ef4949b10822f22231301.scope.
Nov 24 10:10:42 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:10:42 compute-0 podman[295263]: 2025-11-24 10:10:42.95804261 +0000 UTC m=+0.110268189 container init 4e8d6d3a7db642eeda714add2667040576bdb03fe69ef4949b10822f22231301 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 24 10:10:42 compute-0 podman[295263]: 2025-11-24 10:10:42.963622251 +0000 UTC m=+0.115847820 container start 4e8d6d3a7db642eeda714add2667040576bdb03fe69ef4949b10822f22231301 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackburn, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:10:42 compute-0 podman[295263]: 2025-11-24 10:10:42.869435492 +0000 UTC m=+0.021661081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:10:42 compute-0 podman[295263]: 2025-11-24 10:10:42.967059428 +0000 UTC m=+0.119285027 container attach 4e8d6d3a7db642eeda714add2667040576bdb03fe69ef4949b10822f22231301 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackburn, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:10:42 compute-0 sleepy_blackburn[295278]: 167 167
Nov 24 10:10:42 compute-0 systemd[1]: libpod-4e8d6d3a7db642eeda714add2667040576bdb03fe69ef4949b10822f22231301.scope: Deactivated successfully.
Nov 24 10:10:42 compute-0 podman[295263]: 2025-11-24 10:10:42.968510935 +0000 UTC m=+0.120736534 container died 4e8d6d3a7db642eeda714add2667040576bdb03fe69ef4949b10822f22231301 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:10:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-317ce6bc3c7f9d8672931a75f809e3704bdc5462b16aa18ea70073189c1be860-merged.mount: Deactivated successfully.
Nov 24 10:10:43 compute-0 podman[295263]: 2025-11-24 10:10:43.003728049 +0000 UTC m=+0.155953618 container remove 4e8d6d3a7db642eeda714add2667040576bdb03fe69ef4949b10822f22231301 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 10:10:43 compute-0 systemd[1]: libpod-conmon-4e8d6d3a7db642eeda714add2667040576bdb03fe69ef4949b10822f22231301.scope: Deactivated successfully.
Nov 24 10:10:43 compute-0 podman[295300]: 2025-11-24 10:10:43.156015601 +0000 UTC m=+0.037297426 container create 06a190d843001c784f3669a7731ab2dc618b4bb78db4a45a172b720def86899a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dirac, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 10:10:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:43 compute-0 systemd[1]: Started libpod-conmon-06a190d843001c784f3669a7731ab2dc618b4bb78db4a45a172b720def86899a.scope.
Nov 24 10:10:43 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7d3fb996de96dc1ad83edc3b4fea8929bbe9b6dd734ebe0e342e2d35ef06e1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7d3fb996de96dc1ad83edc3b4fea8929bbe9b6dd734ebe0e342e2d35ef06e1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7d3fb996de96dc1ad83edc3b4fea8929bbe9b6dd734ebe0e342e2d35ef06e1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7d3fb996de96dc1ad83edc3b4fea8929bbe9b6dd734ebe0e342e2d35ef06e1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:10:43 compute-0 podman[295300]: 2025-11-24 10:10:43.140313813 +0000 UTC m=+0.021595658 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:10:43 compute-0 podman[295300]: 2025-11-24 10:10:43.242907235 +0000 UTC m=+0.124189090 container init 06a190d843001c784f3669a7731ab2dc618b4bb78db4a45a172b720def86899a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:10:43 compute-0 podman[295300]: 2025-11-24 10:10:43.255743562 +0000 UTC m=+0.137025387 container start 06a190d843001c784f3669a7731ab2dc618b4bb78db4a45a172b720def86899a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:10:43 compute-0 podman[295300]: 2025-11-24 10:10:43.259284961 +0000 UTC m=+0.140566906 container attach 06a190d843001c784f3669a7731ab2dc618b4bb78db4a45a172b720def86899a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 24 10:10:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:43.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:43.594Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:43.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:43 compute-0 lvm[295394]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:10:43 compute-0 lvm[295394]: VG ceph_vg0 finished
Nov 24 10:10:44 compute-0 hardcore_dirac[295317]: {}
Nov 24 10:10:44 compute-0 systemd[1]: libpod-06a190d843001c784f3669a7731ab2dc618b4bb78db4a45a172b720def86899a.scope: Deactivated successfully.
Nov 24 10:10:44 compute-0 podman[295300]: 2025-11-24 10:10:44.070090268 +0000 UTC m=+0.951372093 container died 06a190d843001c784f3669a7731ab2dc618b4bb78db4a45a172b720def86899a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:10:44 compute-0 systemd[1]: libpod-06a190d843001c784f3669a7731ab2dc618b4bb78db4a45a172b720def86899a.scope: Consumed 1.249s CPU time.
Nov 24 10:10:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7d3fb996de96dc1ad83edc3b4fea8929bbe9b6dd734ebe0e342e2d35ef06e1b-merged.mount: Deactivated successfully.
Nov 24 10:10:44 compute-0 podman[295300]: 2025-11-24 10:10:44.12500578 +0000 UTC m=+1.006287605 container remove 06a190d843001c784f3669a7731ab2dc618b4bb78db4a45a172b720def86899a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:10:44 compute-0 systemd[1]: libpod-conmon-06a190d843001c784f3669a7731ab2dc618b4bb78db4a45a172b720def86899a.scope: Deactivated successfully.
Nov 24 10:10:44 compute-0 sudo[295198]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:10:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:10:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:10:44 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:10:44 compute-0 sudo[295412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:10:44 compute-0 sudo[295412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:44 compute-0 sudo[295412]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:44 compute-0 ceph-mon[74331]: pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:44 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:10:44 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:10:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:10:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:10:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:10:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:10:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:10:45 compute-0 sshd-session[295322]: Invalid user root2 from 14.215.126.91 port 52016
Nov 24 10:10:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:10:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:10:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:10:45
Nov 24 10:10:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:10:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:10:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.meta', '.nfs', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', '.rgw.root']
Nov 24 10:10:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:10:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:10:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:10:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:10:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:10:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:10:45 compute-0 sshd-session[295322]: Received disconnect from 14.215.126.91 port 52016:11: Bye Bye [preauth]
Nov 24 10:10:45 compute-0 sshd-session[295322]: Disconnected from invalid user root2 14.215.126.91 port 52016 [preauth]
Nov 24 10:10:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:45.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:10:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:10:46 compute-0 nova_compute[257700]: 2025-11-24 10:10:46.493 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:46 compute-0 ceph-mon[74331]: pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:10:46 compute-0 nova_compute[257700]: 2025-11-24 10:10:46.713 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:10:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:47.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:10:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:47.618Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:47.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:48 compute-0 ceph-mon[74331]: pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:48.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:10:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:49.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:49.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:10:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:10:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:10:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:10:50 compute-0 ceph-mon[74331]: pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:10:50 compute-0 sudo[295444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:10:50 compute-0 sudo[295444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:10:50 compute-0 sudo[295444]: pam_unix(sudo:session): session closed for user root
Nov 24 10:10:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:10:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:10:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:10:50] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:10:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:51.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:51 compute-0 nova_compute[257700]: 2025-11-24 10:10:51.529 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:51 compute-0 nova_compute[257700]: 2025-11-24 10:10:51.715 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:51.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:52 compute-0 ceph-mon[74331]: pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:53.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:53.595Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:53.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:54 compute-0 ceph-mon[74331]: pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:10:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:10:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:10:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:10:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:55.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:55.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:56 compute-0 nova_compute[257700]: 2025-11-24 10:10:56.531 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:56 compute-0 ceph-mon[74331]: pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:56 compute-0 nova_compute[257700]: 2025-11-24 10:10:56.716 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:10:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:10:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:57.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:57.619Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:10:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:57.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:10:58 compute-0 ceph-mon[74331]: pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:10:58 compute-0 podman[295477]: 2025-11-24 10:10:58.792993039 +0000 UTC m=+0.068762346 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 24 10:10:58 compute-0 podman[295478]: 2025-11-24 10:10:58.818919637 +0000 UTC m=+0.091126573 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 10:10:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:10:58.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:10:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:10:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:10:59.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:10:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:10:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:10:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:10:59.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:11:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:11:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:10:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:11:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:11:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:11:00] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:11:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:11:00] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:11:01 compute-0 ceph-mon[74331]: pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:11:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:01.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:01 compute-0 nova_compute[257700]: 2025-11-24 10:11:01.534 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:01 compute-0 nova_compute[257700]: 2025-11-24 10:11:01.717 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:01.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:02 compute-0 ceph-mon[74331]: pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/4142673402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:11:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/4142673402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:11:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:03.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:03.596Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:03.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:04 compute-0 ceph-mon[74331]: pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:04 compute-0 podman[295531]: 2025-11-24 10:11:04.787082586 +0000 UTC m=+0.059507161 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 24 10:11:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:11:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:11:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:11:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:11:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:05.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:05.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:06 compute-0 ceph-mon[74331]: pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:06 compute-0 nova_compute[257700]: 2025-11-24 10:11:06.537 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:06 compute-0 nova_compute[257700]: 2025-11-24 10:11:06.719 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:07.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:07.620Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:07.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:08 compute-0 ceph-mon[74331]: pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:08.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:11:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:08.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:11:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:09.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:09.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:11:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:11:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:11:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:11:10 compute-0 ceph-mon[74331]: pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:10 compute-0 sudo[295557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:11:10 compute-0 sudo[295557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:10 compute-0 sudo[295557]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:11:10] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:11:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:11:10] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:11:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:11.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:11 compute-0 nova_compute[257700]: 2025-11-24 10:11:11.539 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:11 compute-0 nova_compute[257700]: 2025-11-24 10:11:11.721 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:11.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:12 compute-0 ceph-mon[74331]: pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:13.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:13.598Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:13.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:14 compute-0 ceph-mon[74331]: pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:11:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:11:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:11:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:11:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:11:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:11:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:11:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:11:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:11:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:11:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:15.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:15.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:16 compute-0 ceph-mon[74331]: pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:11:16 compute-0 nova_compute[257700]: 2025-11-24 10:11:16.540 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:16 compute-0 nova_compute[257700]: 2025-11-24 10:11:16.722 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:17.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:17.621Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:11:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:17.621Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:11:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:17.621Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:11:17 compute-0 sshd-session[295586]: Received disconnect from 14.215.126.91 port 45074:11: Bye Bye [preauth]
Nov 24 10:11:17 compute-0 sshd-session[295586]: Disconnected from authenticating user root 14.215.126.91 port 45074 [preauth]
Nov 24 10:11:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:17.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:18 compute-0 ceph-mon[74331]: pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:18.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:19.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:19.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:11:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:11:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:11:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:11:20 compute-0 ceph-mon[74331]: pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:11:20.585 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:11:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:11:20.585 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:11:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:11:20.586 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:11:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:11:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:11:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:11:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:11:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:21 compute-0 nova_compute[257700]: 2025-11-24 10:11:21.541 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:21.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:21 compute-0 nova_compute[257700]: 2025-11-24 10:11:21.724 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:21.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:22 compute-0 ceph-mon[74331]: pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:23.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:23.599Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:23.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:24 compute-0 ceph-mon[74331]: pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:11:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:11:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:11:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:11:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:25.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:25.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:26 compute-0 ceph-mon[74331]: pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:26 compute-0 nova_compute[257700]: 2025-11-24 10:11:26.543 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:26 compute-0 nova_compute[257700]: 2025-11-24 10:11:26.725 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:27.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:27.622Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:27 compute-0 nova_compute[257700]: 2025-11-24 10:11:27.916 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:27.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:27 compute-0 nova_compute[257700]: 2025-11-24 10:11:27.933 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:27 compute-0 nova_compute[257700]: 2025-11-24 10:11:27.933 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:27 compute-0 nova_compute[257700]: 2025-11-24 10:11:27.955 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:11:27 compute-0 nova_compute[257700]: 2025-11-24 10:11:27.955 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:11:27 compute-0 nova_compute[257700]: 2025-11-24 10:11:27.955 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:11:27 compute-0 nova_compute[257700]: 2025-11-24 10:11:27.956 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:11:27 compute-0 nova_compute[257700]: 2025-11-24 10:11:27.956 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:11:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:11:28 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4194509216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:28 compute-0 nova_compute[257700]: 2025-11-24 10:11:28.400 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:11:28 compute-0 ceph-mon[74331]: pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4194509216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:28 compute-0 nova_compute[257700]: 2025-11-24 10:11:28.565 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:11:28 compute-0 nova_compute[257700]: 2025-11-24 10:11:28.566 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4496MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:11:28 compute-0 nova_compute[257700]: 2025-11-24 10:11:28.566 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:11:28 compute-0 nova_compute[257700]: 2025-11-24 10:11:28.566 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:11:28 compute-0 nova_compute[257700]: 2025-11-24 10:11:28.643 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:11:28 compute-0 nova_compute[257700]: 2025-11-24 10:11:28.643 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:11:28 compute-0 nova_compute[257700]: 2025-11-24 10:11:28.665 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:11:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:28.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:11:29 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3357223504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:29 compute-0 nova_compute[257700]: 2025-11-24 10:11:29.096 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:11:29 compute-0 nova_compute[257700]: 2025-11-24 10:11:29.102 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:11:29 compute-0 nova_compute[257700]: 2025-11-24 10:11:29.119 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:11:29 compute-0 nova_compute[257700]: 2025-11-24 10:11:29.120 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:11:29 compute-0 nova_compute[257700]: 2025-11-24 10:11:29.120 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:11:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3357223504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:29.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:29 compute-0 podman[295647]: 2025-11-24 10:11:29.789803504 +0000 UTC m=+0.059153011 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 10:11:29 compute-0 podman[295648]: 2025-11-24 10:11:29.866527491 +0000 UTC m=+0.126478340 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 10:11:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:29.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:11:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:11:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:11:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:11:30 compute-0 nova_compute[257700]: 2025-11-24 10:11:30.108 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:30 compute-0 nova_compute[257700]: 2025-11-24 10:11:30.108 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:30 compute-0 nova_compute[257700]: 2025-11-24 10:11:30.108 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:11:30 compute-0 nova_compute[257700]: 2025-11-24 10:11:30.108 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:11:30 compute-0 nova_compute[257700]: 2025-11-24 10:11:30.142 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:11:30 compute-0 nova_compute[257700]: 2025-11-24 10:11:30.143 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:30 compute-0 nova_compute[257700]: 2025-11-24 10:11:30.143 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:11:30 compute-0 ceph-mon[74331]: pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:11:30 compute-0 sudo[295694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:11:30 compute-0 sudo[295694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:30 compute-0 sudo[295694]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:30 compute-0 nova_compute[257700]: 2025-11-24 10:11:30.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:30 compute-0 nova_compute[257700]: 2025-11-24 10:11:30.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:11:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:11:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:11:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:11:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:31 compute-0 nova_compute[257700]: 2025-11-24 10:11:31.545 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:31.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:31 compute-0 nova_compute[257700]: 2025-11-24 10:11:31.726 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:31.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:32 compute-0 ceph-mon[74331]: pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1801012329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:33.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:33.600Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:33 compute-0 nova_compute[257700]: 2025-11-24 10:11:33.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:33.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:34 compute-0 ceph-mon[74331]: pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:34 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/451229042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:34 compute-0 nova_compute[257700]: 2025-11-24 10:11:34.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:11:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:11:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:11:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:11:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:11:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:35.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:35 compute-0 podman[295724]: 2025-11-24 10:11:35.801858864 +0000 UTC m=+0.073092326 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:11:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:35.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:36 compute-0 nova_compute[257700]: 2025-11-24 10:11:36.549 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:36 compute-0 ceph-mon[74331]: pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3225313100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:36 compute-0 nova_compute[257700]: 2025-11-24 10:11:36.727 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2059603497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:11:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:37.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:37.622Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:37.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:38 compute-0 sshd-session[295745]: Invalid user backup from 83.229.122.23 port 56842
Nov 24 10:11:38 compute-0 sshd-session[295745]: Received disconnect from 83.229.122.23 port 56842:11: Bye Bye [preauth]
Nov 24 10:11:38 compute-0 sshd-session[295745]: Disconnected from invalid user backup 83.229.122.23 port 56842 [preauth]
Nov 24 10:11:38 compute-0 ceph-mon[74331]: pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:38.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:39.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:39.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:11:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:11:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:11:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:11:40 compute-0 ceph-mon[74331]: pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:11:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:11:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:11:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:11:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:41 compute-0 nova_compute[257700]: 2025-11-24 10:11:41.550 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:41.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:41 compute-0 nova_compute[257700]: 2025-11-24 10:11:41.729 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:41.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:41 compute-0 sshd-session[295751]: Invalid user mm from 36.255.3.203 port 60743
Nov 24 10:11:42 compute-0 sshd-session[295751]: Received disconnect from 36.255.3.203 port 60743:11: Bye Bye [preauth]
Nov 24 10:11:42 compute-0 sshd-session[295751]: Disconnected from invalid user mm 36.255.3.203 port 60743 [preauth]
Nov 24 10:11:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:42 compute-0 ceph-mon[74331]: pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:43.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:43.601Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:11:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:43.601Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:11:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:43.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:44 compute-0 sudo[295759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:11:44 compute-0 sudo[295759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:44 compute-0 sudo[295759]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:44 compute-0 ceph-mon[74331]: pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:44 compute-0 sudo[295784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:11:44 compute-0 sudo[295784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:44 compute-0 sshd-session[295755]: Invalid user test1 from 45.78.198.78 port 53432
Nov 24 10:11:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:11:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:11:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:11:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:11:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:45 compute-0 sshd-session[295755]: Received disconnect from 45.78.198.78 port 53432:11: Bye Bye [preauth]
Nov 24 10:11:45 compute-0 sshd-session[295755]: Disconnected from invalid user test1 45.78.198.78 port 53432 [preauth]
Nov 24 10:11:45 compute-0 sudo[295784]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:11:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:11:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:11:45
Nov 24 10:11:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:11:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:11:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'volumes', 'vms', '.rgw.root', 'default.rgw.log']
Nov 24 10:11:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:11:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:11:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:11:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:11:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:11:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:45.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:11:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:11:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:45.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:11:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:11:46 compute-0 nova_compute[257700]: 2025-11-24 10:11:46.551 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:46 compute-0 ceph-mon[74331]: pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:11:46 compute-0 nova_compute[257700]: 2025-11-24 10:11:46.731 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 24 10:11:46 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 24 10:11:46 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 24 10:11:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 24 10:11:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:47.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:47.623Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 24 10:11:47 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 10:11:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:47.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:47 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:47 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:47 compute-0 ceph-mon[74331]: pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:11:47 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:47 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:47 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 10:11:47 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 24 10:11:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 24 10:11:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 10:11:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:11:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:48 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:11:48 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:48 compute-0 sudo[295845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:11:48 compute-0 sudo[295845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:48 compute-0 sudo[295845]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:48 compute-0 sudo[295870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:11:48 compute-0 sudo[295870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:48 compute-0 podman[295938]: 2025-11-24 10:11:48.781378471 +0000 UTC m=+0.050295007 container create 808bd7d080d991f4c8f71380df0ba714ee3bb48efdc073a31932c615f33109d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_buck, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 10:11:48 compute-0 systemd[1]: Started libpod-conmon-808bd7d080d991f4c8f71380df0ba714ee3bb48efdc073a31932c615f33109d7.scope.
Nov 24 10:11:48 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:11:48 compute-0 podman[295938]: 2025-11-24 10:11:48.759303621 +0000 UTC m=+0.028220207 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:11:48 compute-0 podman[295938]: 2025-11-24 10:11:48.863630688 +0000 UTC m=+0.132547244 container init 808bd7d080d991f4c8f71380df0ba714ee3bb48efdc073a31932c615f33109d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 10:11:48 compute-0 podman[295938]: 2025-11-24 10:11:48.870765929 +0000 UTC m=+0.139682465 container start 808bd7d080d991f4c8f71380df0ba714ee3bb48efdc073a31932c615f33109d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 10:11:48 compute-0 podman[295938]: 2025-11-24 10:11:48.874151404 +0000 UTC m=+0.143067960 container attach 808bd7d080d991f4c8f71380df0ba714ee3bb48efdc073a31932c615f33109d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:11:48 compute-0 determined_buck[295954]: 167 167
Nov 24 10:11:48 compute-0 systemd[1]: libpod-808bd7d080d991f4c8f71380df0ba714ee3bb48efdc073a31932c615f33109d7.scope: Deactivated successfully.
Nov 24 10:11:48 compute-0 conmon[295954]: conmon 808bd7d080d991f4c8f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-808bd7d080d991f4c8f71380df0ba714ee3bb48efdc073a31932c615f33109d7.scope/container/memory.events
Nov 24 10:11:48 compute-0 podman[295938]: 2025-11-24 10:11:48.879134571 +0000 UTC m=+0.148051127 container died 808bd7d080d991f4c8f71380df0ba714ee3bb48efdc073a31932c615f33109d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_buck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:11:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c508008cafa8db163bd79a5e848682681c6f1fa38e217e2ac65e29cedacb9ec-merged.mount: Deactivated successfully.
Nov 24 10:11:48 compute-0 podman[295938]: 2025-11-24 10:11:48.919420422 +0000 UTC m=+0.188336958 container remove 808bd7d080d991f4c8f71380df0ba714ee3bb48efdc073a31932c615f33109d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:11:48 compute-0 systemd[1]: libpod-conmon-808bd7d080d991f4c8f71380df0ba714ee3bb48efdc073a31932c615f33109d7.scope: Deactivated successfully.
Nov 24 10:11:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:48.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:49 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 10:11:49 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 24 10:11:49 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:11:49 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:11:49 compute-0 ceph-mon[74331]: pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:49 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:49 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:49 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:11:49 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:11:49 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:11:49 compute-0 podman[295979]: 2025-11-24 10:11:49.070619318 +0000 UTC m=+0.039742989 container create a7766eb2e1cd5e8dd9b61c2ea28fd4ff4f28245ce35e0358b808261421942f91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_shamir, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 10:11:49 compute-0 systemd[1]: Started libpod-conmon-a7766eb2e1cd5e8dd9b61c2ea28fd4ff4f28245ce35e0358b808261421942f91.scope.
Nov 24 10:11:49 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:11:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a7f9fb3c7a3fc2de34a3281fbb3dd0b216b09640faa3f244b20f7adfa07fac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a7f9fb3c7a3fc2de34a3281fbb3dd0b216b09640faa3f244b20f7adfa07fac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a7f9fb3c7a3fc2de34a3281fbb3dd0b216b09640faa3f244b20f7adfa07fac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a7f9fb3c7a3fc2de34a3281fbb3dd0b216b09640faa3f244b20f7adfa07fac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a7f9fb3c7a3fc2de34a3281fbb3dd0b216b09640faa3f244b20f7adfa07fac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:49 compute-0 podman[295979]: 2025-11-24 10:11:49.054462918 +0000 UTC m=+0.023586619 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:11:49 compute-0 podman[295979]: 2025-11-24 10:11:49.168273155 +0000 UTC m=+0.137396866 container init a7766eb2e1cd5e8dd9b61c2ea28fd4ff4f28245ce35e0358b808261421942f91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 10:11:49 compute-0 podman[295979]: 2025-11-24 10:11:49.176055493 +0000 UTC m=+0.145179184 container start a7766eb2e1cd5e8dd9b61c2ea28fd4ff4f28245ce35e0358b808261421942f91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_shamir, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 10:11:49 compute-0 podman[295979]: 2025-11-24 10:11:49.183682426 +0000 UTC m=+0.152806117 container attach a7766eb2e1cd5e8dd9b61c2ea28fd4ff4f28245ce35e0358b808261421942f91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 10:11:49 compute-0 amazing_shamir[295996]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:11:49 compute-0 amazing_shamir[295996]: --> All data devices are unavailable
Nov 24 10:11:49 compute-0 systemd[1]: libpod-a7766eb2e1cd5e8dd9b61c2ea28fd4ff4f28245ce35e0358b808261421942f91.scope: Deactivated successfully.
Nov 24 10:11:49 compute-0 podman[295979]: 2025-11-24 10:11:49.527654241 +0000 UTC m=+0.496777902 container died a7766eb2e1cd5e8dd9b61c2ea28fd4ff4f28245ce35e0358b808261421942f91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 10:11:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-08a7f9fb3c7a3fc2de34a3281fbb3dd0b216b09640faa3f244b20f7adfa07fac-merged.mount: Deactivated successfully.
Nov 24 10:11:49 compute-0 podman[295979]: 2025-11-24 10:11:49.570395535 +0000 UTC m=+0.539519206 container remove a7766eb2e1cd5e8dd9b61c2ea28fd4ff4f28245ce35e0358b808261421942f91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_shamir, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 24 10:11:49 compute-0 systemd[1]: libpod-conmon-a7766eb2e1cd5e8dd9b61c2ea28fd4ff4f28245ce35e0358b808261421942f91.scope: Deactivated successfully.
Nov 24 10:11:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:49.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:49 compute-0 sudo[295870]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:49 compute-0 sudo[296024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:11:49 compute-0 sudo[296024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:49 compute-0 sudo[296024]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:49 compute-0 sudo[296049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:11:49 compute-0 sudo[296049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:49.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:11:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:11:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:11:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:11:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:50 compute-0 podman[296116]: 2025-11-24 10:11:50.238199625 +0000 UTC m=+0.046888610 container create 09b6abf6745b0b3c39e758ee583d22e8ba8c21d35d5e705f78e50cb027a8b70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_northcutt, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:11:50 compute-0 systemd[1]: Started libpod-conmon-09b6abf6745b0b3c39e758ee583d22e8ba8c21d35d5e705f78e50cb027a8b70d.scope.
Nov 24 10:11:50 compute-0 podman[296116]: 2025-11-24 10:11:50.217654344 +0000 UTC m=+0.026343359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:11:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:11:50 compute-0 podman[296116]: 2025-11-24 10:11:50.335951394 +0000 UTC m=+0.144640409 container init 09b6abf6745b0b3c39e758ee583d22e8ba8c21d35d5e705f78e50cb027a8b70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 10:11:50 compute-0 podman[296116]: 2025-11-24 10:11:50.348171345 +0000 UTC m=+0.156860340 container start 09b6abf6745b0b3c39e758ee583d22e8ba8c21d35d5e705f78e50cb027a8b70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 10:11:50 compute-0 podman[296116]: 2025-11-24 10:11:50.352805102 +0000 UTC m=+0.161494107 container attach 09b6abf6745b0b3c39e758ee583d22e8ba8c21d35d5e705f78e50cb027a8b70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_northcutt, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 10:11:50 compute-0 stoic_northcutt[296132]: 167 167
Nov 24 10:11:50 compute-0 systemd[1]: libpod-09b6abf6745b0b3c39e758ee583d22e8ba8c21d35d5e705f78e50cb027a8b70d.scope: Deactivated successfully.
Nov 24 10:11:50 compute-0 podman[296116]: 2025-11-24 10:11:50.355330476 +0000 UTC m=+0.164019461 container died 09b6abf6745b0b3c39e758ee583d22e8ba8c21d35d5e705f78e50cb027a8b70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_northcutt, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:11:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-199ae5bcfc681706f65516b0a01907529c3a579d040c213b7a9fb21652b6ed6c-merged.mount: Deactivated successfully.
Nov 24 10:11:50 compute-0 podman[296116]: 2025-11-24 10:11:50.410354942 +0000 UTC m=+0.219043957 container remove 09b6abf6745b0b3c39e758ee583d22e8ba8c21d35d5e705f78e50cb027a8b70d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_northcutt, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 10:11:50 compute-0 systemd[1]: libpod-conmon-09b6abf6745b0b3c39e758ee583d22e8ba8c21d35d5e705f78e50cb027a8b70d.scope: Deactivated successfully.
Nov 24 10:11:50 compute-0 podman[296156]: 2025-11-24 10:11:50.637844473 +0000 UTC m=+0.060359412 container create 0ef507f3b358233f09ad421c5e3f40de3f6bdeaef22ba4b305fe38c7ef33195f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 10:11:50 compute-0 systemd[1]: Started libpod-conmon-0ef507f3b358233f09ad421c5e3f40de3f6bdeaef22ba4b305fe38c7ef33195f.scope.
Nov 24 10:11:50 compute-0 podman[296156]: 2025-11-24 10:11:50.614064009 +0000 UTC m=+0.036578958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:11:50 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03008067a7474a7c141efa13a364d4da5a834f282bbad6f587edb3ea54dbd06d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03008067a7474a7c141efa13a364d4da5a834f282bbad6f587edb3ea54dbd06d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03008067a7474a7c141efa13a364d4da5a834f282bbad6f587edb3ea54dbd06d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03008067a7474a7c141efa13a364d4da5a834f282bbad6f587edb3ea54dbd06d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:50 compute-0 podman[296156]: 2025-11-24 10:11:50.740196929 +0000 UTC m=+0.162711878 container init 0ef507f3b358233f09ad421c5e3f40de3f6bdeaef22ba4b305fe38c7ef33195f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:11:50 compute-0 podman[296156]: 2025-11-24 10:11:50.749249458 +0000 UTC m=+0.171764347 container start 0ef507f3b358233f09ad421c5e3f40de3f6bdeaef22ba4b305fe38c7ef33195f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 10:11:50 compute-0 podman[296156]: 2025-11-24 10:11:50.752649325 +0000 UTC m=+0.175164324 container attach 0ef507f3b358233f09ad421c5e3f40de3f6bdeaef22ba4b305fe38c7ef33195f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_bose, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 24 10:11:50 compute-0 sudo[296177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:11:50 compute-0 sudo[296177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:50 compute-0 sudo[296177]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:11:50] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:11:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:11:50] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:11:51 compute-0 determined_bose[296172]: {
Nov 24 10:11:51 compute-0 determined_bose[296172]:     "0": [
Nov 24 10:11:51 compute-0 determined_bose[296172]:         {
Nov 24 10:11:51 compute-0 determined_bose[296172]:             "devices": [
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "/dev/loop3"
Nov 24 10:11:51 compute-0 determined_bose[296172]:             ],
Nov 24 10:11:51 compute-0 determined_bose[296172]:             "lv_name": "ceph_lv0",
Nov 24 10:11:51 compute-0 determined_bose[296172]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:11:51 compute-0 determined_bose[296172]:             "lv_size": "21470642176",
Nov 24 10:11:51 compute-0 determined_bose[296172]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:11:51 compute-0 determined_bose[296172]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:11:51 compute-0 determined_bose[296172]:             "name": "ceph_lv0",
Nov 24 10:11:51 compute-0 determined_bose[296172]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:11:51 compute-0 determined_bose[296172]:             "tags": {
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.cluster_name": "ceph",
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.crush_device_class": "",
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.encrypted": "0",
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.osd_id": "0",
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.type": "block",
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.vdo": "0",
Nov 24 10:11:51 compute-0 determined_bose[296172]:                 "ceph.with_tpm": "0"
Nov 24 10:11:51 compute-0 determined_bose[296172]:             },
Nov 24 10:11:51 compute-0 determined_bose[296172]:             "type": "block",
Nov 24 10:11:51 compute-0 determined_bose[296172]:             "vg_name": "ceph_vg0"
Nov 24 10:11:51 compute-0 determined_bose[296172]:         }
Nov 24 10:11:51 compute-0 determined_bose[296172]:     ]
Nov 24 10:11:51 compute-0 determined_bose[296172]: }
Nov 24 10:11:51 compute-0 systemd[1]: libpod-0ef507f3b358233f09ad421c5e3f40de3f6bdeaef22ba4b305fe38c7ef33195f.scope: Deactivated successfully.
Nov 24 10:11:51 compute-0 podman[296156]: 2025-11-24 10:11:51.075867934 +0000 UTC m=+0.498382843 container died 0ef507f3b358233f09ad421c5e3f40de3f6bdeaef22ba4b305fe38c7ef33195f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:11:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-03008067a7474a7c141efa13a364d4da5a834f282bbad6f587edb3ea54dbd06d-merged.mount: Deactivated successfully.
Nov 24 10:11:51 compute-0 podman[296156]: 2025-11-24 10:11:51.12067441 +0000 UTC m=+0.543189349 container remove 0ef507f3b358233f09ad421c5e3f40de3f6bdeaef22ba4b305fe38c7ef33195f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 10:11:51 compute-0 systemd[1]: libpod-conmon-0ef507f3b358233f09ad421c5e3f40de3f6bdeaef22ba4b305fe38c7ef33195f.scope: Deactivated successfully.
Nov 24 10:11:51 compute-0 sudo[296049]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:51 compute-0 sudo[296217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:11:51 compute-0 sudo[296217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:51 compute-0 sudo[296217]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:51 compute-0 ceph-mon[74331]: pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:51 compute-0 sudo[296242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:11:51 compute-0 sudo[296242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:51 compute-0 nova_compute[257700]: 2025-11-24 10:11:51.606 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:51.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:51 compute-0 nova_compute[257700]: 2025-11-24 10:11:51.732 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:51 compute-0 podman[296304]: 2025-11-24 10:11:51.772845213 +0000 UTC m=+0.051646842 container create 9850d753da86fb08529a3d851f0aabe793b6ad1de36722d5f06248426c054554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 24 10:11:51 compute-0 systemd[1]: Started libpod-conmon-9850d753da86fb08529a3d851f0aabe793b6ad1de36722d5f06248426c054554.scope.
Nov 24 10:11:51 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:11:51 compute-0 podman[296304]: 2025-11-24 10:11:51.748201748 +0000 UTC m=+0.027003397 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:11:51 compute-0 podman[296304]: 2025-11-24 10:11:51.849983639 +0000 UTC m=+0.128785278 container init 9850d753da86fb08529a3d851f0aabe793b6ad1de36722d5f06248426c054554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:11:51 compute-0 podman[296304]: 2025-11-24 10:11:51.859752667 +0000 UTC m=+0.138554286 container start 9850d753da86fb08529a3d851f0aabe793b6ad1de36722d5f06248426c054554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 10:11:51 compute-0 podman[296304]: 2025-11-24 10:11:51.865351629 +0000 UTC m=+0.144153248 container attach 9850d753da86fb08529a3d851f0aabe793b6ad1de36722d5f06248426c054554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_cerf, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 24 10:11:51 compute-0 silly_cerf[296321]: 167 167
Nov 24 10:11:51 compute-0 systemd[1]: libpod-9850d753da86fb08529a3d851f0aabe793b6ad1de36722d5f06248426c054554.scope: Deactivated successfully.
Nov 24 10:11:51 compute-0 podman[296304]: 2025-11-24 10:11:51.869920726 +0000 UTC m=+0.148722345 container died 9850d753da86fb08529a3d851f0aabe793b6ad1de36722d5f06248426c054554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:11:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-240e9484a311176e3b0132a422d87ddd6a1d38e21893abec48ad07695bc1c2c9-merged.mount: Deactivated successfully.
Nov 24 10:11:51 compute-0 podman[296304]: 2025-11-24 10:11:51.917330758 +0000 UTC m=+0.196132377 container remove 9850d753da86fb08529a3d851f0aabe793b6ad1de36722d5f06248426c054554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 10:11:51 compute-0 systemd[1]: libpod-conmon-9850d753da86fb08529a3d851f0aabe793b6ad1de36722d5f06248426c054554.scope: Deactivated successfully.
Nov 24 10:11:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:51.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 24 10:11:52 compute-0 podman[296345]: 2025-11-24 10:11:52.209322044 +0000 UTC m=+0.080993395 container create b2a5baa78ddccd15f39778fadf427413bcf0979e6115efd28382c9fda30d60a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 24 10:11:52 compute-0 systemd[1]: Started libpod-conmon-b2a5baa78ddccd15f39778fadf427413bcf0979e6115efd28382c9fda30d60a6.scope.
Nov 24 10:11:52 compute-0 podman[296345]: 2025-11-24 10:11:52.17797212 +0000 UTC m=+0.049643511 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:11:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:52 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3818a44158ca47652151e6a64e09de3766cacee7edf021265303daebf7258aac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3818a44158ca47652151e6a64e09de3766cacee7edf021265303daebf7258aac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3818a44158ca47652151e6a64e09de3766cacee7edf021265303daebf7258aac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3818a44158ca47652151e6a64e09de3766cacee7edf021265303daebf7258aac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:11:52 compute-0 podman[296345]: 2025-11-24 10:11:52.329700168 +0000 UTC m=+0.201371559 container init b2a5baa78ddccd15f39778fadf427413bcf0979e6115efd28382c9fda30d60a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:11:52 compute-0 podman[296345]: 2025-11-24 10:11:52.344465053 +0000 UTC m=+0.216136404 container start b2a5baa78ddccd15f39778fadf427413bcf0979e6115efd28382c9fda30d60a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 24 10:11:52 compute-0 podman[296345]: 2025-11-24 10:11:52.349849259 +0000 UTC m=+0.221520650 container attach b2a5baa78ddccd15f39778fadf427413bcf0979e6115efd28382c9fda30d60a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 10:11:53 compute-0 lvm[296437]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:11:53 compute-0 lvm[296437]: VG ceph_vg0 finished
Nov 24 10:11:53 compute-0 great_ardinghelli[296362]: {}
Nov 24 10:11:53 compute-0 systemd[1]: libpod-b2a5baa78ddccd15f39778fadf427413bcf0979e6115efd28382c9fda30d60a6.scope: Deactivated successfully.
Nov 24 10:11:53 compute-0 podman[296345]: 2025-11-24 10:11:53.227014059 +0000 UTC m=+1.098685370 container died b2a5baa78ddccd15f39778fadf427413bcf0979e6115efd28382c9fda30d60a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 10:11:53 compute-0 systemd[1]: libpod-b2a5baa78ddccd15f39778fadf427413bcf0979e6115efd28382c9fda30d60a6.scope: Consumed 1.458s CPU time.
Nov 24 10:11:53 compute-0 ceph-mon[74331]: pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 24 10:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3818a44158ca47652151e6a64e09de3766cacee7edf021265303daebf7258aac-merged.mount: Deactivated successfully.
Nov 24 10:11:53 compute-0 podman[296345]: 2025-11-24 10:11:53.283969204 +0000 UTC m=+1.155640525 container remove b2a5baa78ddccd15f39778fadf427413bcf0979e6115efd28382c9fda30d60a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 24 10:11:53 compute-0 systemd[1]: libpod-conmon-b2a5baa78ddccd15f39778fadf427413bcf0979e6115efd28382c9fda30d60a6.scope: Deactivated successfully.
Nov 24 10:11:53 compute-0 sudo[296242]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:11:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:53 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:11:53 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:53 compute-0 sudo[296452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:11:53 compute-0 sudo[296452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:11:53 compute-0 sudo[296452]: pam_unix(sudo:session): session closed for user root
Nov 24 10:11:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:53.602Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:53.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:53.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:54 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:54 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:11:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:11:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:11:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:11:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:11:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:11:55 compute-0 ceph-mon[74331]: pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:55.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:11:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:55.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:11:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:56 compute-0 nova_compute[257700]: 2025-11-24 10:11:56.608 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:56 compute-0 nova_compute[257700]: 2025-11-24 10:11:56.734 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:11:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:11:57 compute-0 ceph-mon[74331]: pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:57.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:57.624Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:11:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:57.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:58.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:11:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:11:58.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:11:59 compute-0 ceph-mon[74331]: pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Nov 24 10:11:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:11:59.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:11:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:11:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:11:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:11:59.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:12:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:12:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:12:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:12:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:00 compute-0 podman[296487]: 2025-11-24 10:12:00.801004192 +0000 UTC m=+0.065509512 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 10:12:00 compute-0 podman[296488]: 2025-11-24 10:12:00.850449316 +0000 UTC m=+0.107358104 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:12:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:12:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:12:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:12:00] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:12:01 compute-0 nova_compute[257700]: 2025-11-24 10:12:01.610 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:01.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:01 compute-0 nova_compute[257700]: 2025-11-24 10:12:01.737 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:01.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:12:02 compute-0 ceph-mon[74331]: pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:02 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:12:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:03 compute-0 ceph-mon[74331]: pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:03.604Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:12:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:03.604Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:12:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:03.604Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:12:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:03.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:12:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:03.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:12:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:12:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:12:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:12:05 compute-0 ceph-mon[74331]: pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:12:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:05.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:12:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:06 compute-0 nova_compute[257700]: 2025-11-24 10:12:06.613 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:06 compute-0 nova_compute[257700]: 2025-11-24 10:12:06.738 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:06 compute-0 podman[296537]: 2025-11-24 10:12:06.806035457 +0000 UTC m=+0.075250699 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:12:07 compute-0 ceph-mon[74331]: pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:07.626Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:07.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:07.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:08.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:09 compute-0 ceph-mon[74331]: pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:09.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:09.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:12:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:12:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:12:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:12:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:12:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:12:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:12:10] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:12:11 compute-0 sudo[296561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:12:11 compute-0 sudo[296561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:11 compute-0 sudo[296561]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:11 compute-0 ceph-mon[74331]: pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:11.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:11 compute-0 nova_compute[257700]: 2025-11-24 10:12:11.663 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:11 compute-0 nova_compute[257700]: 2025-11-24 10:12:11.740 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:11.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:13 compute-0 ceph-mon[74331]: pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:13.605Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:13.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:13.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:12:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:12:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:12:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:12:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:12:15 compute-0 ceph-mon[74331]: pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:12:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:12:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:12:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:12:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:12:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:12:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:15.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:12:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:15.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:12:16 compute-0 nova_compute[257700]: 2025-11-24 10:12:16.665 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:16 compute-0 nova_compute[257700]: 2025-11-24 10:12:16.743 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:17 compute-0 ceph-mon[74331]: pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:17.628Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:17.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:12:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:17.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:12:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:18.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:19 compute-0 ceph-mon[74331]: pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:19.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:19.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:12:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:12:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:12:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:12:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:12:20.587 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:12:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:12:20.587 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:12:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:12:20.588 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:12:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:12:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 24 10:12:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:12:20] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Nov 24 10:12:21 compute-0 ceph-mon[74331]: pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:21 compute-0 nova_compute[257700]: 2025-11-24 10:12:21.666 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:21.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:21 compute-0 nova_compute[257700]: 2025-11-24 10:12:21.744 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:21.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:23 compute-0 ceph-mon[74331]: pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:23.606Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:23.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:23.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:12:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:12:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:12:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:12:25 compute-0 ceph-mon[74331]: pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:25.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:25.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:26 compute-0 nova_compute[257700]: 2025-11-24 10:12:26.669 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:26 compute-0 nova_compute[257700]: 2025-11-24 10:12:26.745 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:27 compute-0 ceph-mon[74331]: pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:27.629Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:27.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:12:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:27.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:28 compute-0 nova_compute[257700]: 2025-11-24 10:12:28.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:28 compute-0 nova_compute[257700]: 2025-11-24 10:12:28.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:12:28 compute-0 nova_compute[257700]: 2025-11-24 10:12:28.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:12:28 compute-0 nova_compute[257700]: 2025-11-24 10:12:28.931 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:12:28 compute-0 nova_compute[257700]: 2025-11-24 10:12:28.931 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:28 compute-0 nova_compute[257700]: 2025-11-24 10:12:28.932 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:28 compute-0 nova_compute[257700]: 2025-11-24 10:12:28.932 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:12:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:28.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:29 compute-0 ceph-mon[74331]: pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:12:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:29.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:12:29 compute-0 nova_compute[257700]: 2025-11-24 10:12:29.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:29 compute-0 nova_compute[257700]: 2025-11-24 10:12:29.941 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:12:29 compute-0 nova_compute[257700]: 2025-11-24 10:12:29.941 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:12:29 compute-0 nova_compute[257700]: 2025-11-24 10:12:29.941 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:12:29 compute-0 nova_compute[257700]: 2025-11-24 10:12:29.942 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:12:29 compute-0 nova_compute[257700]: 2025-11-24 10:12:29.942 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:12:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:29.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:12:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:12:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:12:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:12:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:12:30 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3648104697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:30 compute-0 nova_compute[257700]: 2025-11-24 10:12:30.398 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:12:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3648104697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:30 compute-0 nova_compute[257700]: 2025-11-24 10:12:30.553 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:12:30 compute-0 nova_compute[257700]: 2025-11-24 10:12:30.555 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4503MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:12:30 compute-0 nova_compute[257700]: 2025-11-24 10:12:30.555 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:12:30 compute-0 nova_compute[257700]: 2025-11-24 10:12:30.555 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:12:30 compute-0 nova_compute[257700]: 2025-11-24 10:12:30.636 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:12:30 compute-0 nova_compute[257700]: 2025-11-24 10:12:30.636 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:12:30 compute-0 nova_compute[257700]: 2025-11-24 10:12:30.669 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:12:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:12:30] "GET /metrics HTTP/1.1" 200 48448 "" "Prometheus/2.51.0"
Nov 24 10:12:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:12:30] "GET /metrics HTTP/1.1" 200 48448 "" "Prometheus/2.51.0"
Nov 24 10:12:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:12:31 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/574713676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:31 compute-0 nova_compute[257700]: 2025-11-24 10:12:31.132 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:12:31 compute-0 nova_compute[257700]: 2025-11-24 10:12:31.139 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:12:31 compute-0 nova_compute[257700]: 2025-11-24 10:12:31.152 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:12:31 compute-0 nova_compute[257700]: 2025-11-24 10:12:31.153 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:12:31 compute-0 nova_compute[257700]: 2025-11-24 10:12:31.154 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:12:31 compute-0 sudo[296650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:12:31 compute-0 sudo[296650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:31 compute-0 sudo[296650]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:31 compute-0 podman[296674]: 2025-11-24 10:12:31.264047318 +0000 UTC m=+0.053411217 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:12:31 compute-0 podman[296675]: 2025-11-24 10:12:31.31617925 +0000 UTC m=+0.102418929 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Nov 24 10:12:31 compute-0 ceph-mon[74331]: pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:12:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/574713676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:31.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:31 compute-0 nova_compute[257700]: 2025-11-24 10:12:31.717 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:31 compute-0 nova_compute[257700]: 2025-11-24 10:12:31.746 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:31.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:32 compute-0 nova_compute[257700]: 2025-11-24 10:12:32.148 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:32 compute-0 nova_compute[257700]: 2025-11-24 10:12:32.149 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:32 compute-0 nova_compute[257700]: 2025-11-24 10:12:32.150 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:33 compute-0 ceph-mon[74331]: pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:33.607Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:33.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:33.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:34 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1166892128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:34 compute-0 nova_compute[257700]: 2025-11-24 10:12:34.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:12:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:12:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:12:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:12:35 compute-0 ceph-mon[74331]: pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:35.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:35.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4225103940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:36 compute-0 nova_compute[257700]: 2025-11-24 10:12:36.718 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:36 compute-0 nova_compute[257700]: 2025-11-24 10:12:36.747 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:36 compute-0 nova_compute[257700]: 2025-11-24 10:12:36.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:37 compute-0 ceph-mon[74331]: pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:37.630Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:37.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:37 compute-0 podman[296729]: 2025-11-24 10:12:37.821260759 +0000 UTC m=+0.082294128 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 10:12:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:37.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1011159927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:38 compute-0 sshd-session[296722]: Connection closed by 14.215.126.91 port 49472 [preauth]
Nov 24 10:12:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:38.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:39 compute-0 ceph-mon[74331]: pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1739914990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:12:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:39.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:12:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:12:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:12:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:12:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:40.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:12:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:40 compute-0 nova_compute[257700]: 2025-11-24 10:12:40.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:40 compute-0 nova_compute[257700]: 2025-11-24 10:12:40.923 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 24 10:12:40 compute-0 nova_compute[257700]: 2025-11-24 10:12:40.936 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 24 10:12:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:12:40] "GET /metrics HTTP/1.1" 200 48448 "" "Prometheus/2.51.0"
Nov 24 10:12:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:12:40] "GET /metrics HTTP/1.1" 200 48448 "" "Prometheus/2.51.0"
Nov 24 10:12:41 compute-0 ceph-mon[74331]: pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:41.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:41 compute-0 nova_compute[257700]: 2025-11-24 10:12:41.720 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:41 compute-0 nova_compute[257700]: 2025-11-24 10:12:41.748 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:42.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.305409) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162305479, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1377, "num_deletes": 255, "total_data_size": 2512684, "memory_usage": 2561024, "flush_reason": "Manual Compaction"}
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162322949, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2458052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37030, "largest_seqno": 38406, "table_properties": {"data_size": 2451627, "index_size": 3624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13693, "raw_average_key_size": 19, "raw_value_size": 2438570, "raw_average_value_size": 3539, "num_data_blocks": 156, "num_entries": 689, "num_filter_entries": 689, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763979037, "oldest_key_time": 1763979037, "file_creation_time": 1763979162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 17585 microseconds, and 7834 cpu microseconds.
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.322996) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2458052 bytes OK
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.323017) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.324687) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.324704) EVENT_LOG_v1 {"time_micros": 1763979162324698, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.324724) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2506714, prev total WAL file size 2506714, number of live WAL files 2.
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.325732) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303033' seq:72057594037927935, type:22 .. '6C6F676D0031323534' seq:0, type:0; will stop at (end)
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2400KB)], [80(11MB)]
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162325847, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15010365, "oldest_snapshot_seqno": -1}
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6896 keys, 14849120 bytes, temperature: kUnknown
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162435564, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 14849120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14803558, "index_size": 27210, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 181295, "raw_average_key_size": 26, "raw_value_size": 14679952, "raw_average_value_size": 2128, "num_data_blocks": 1072, "num_entries": 6896, "num_filter_entries": 6896, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763979162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.436022) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 14849120 bytes
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.437532) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.7 rd, 135.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 12.0 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(12.1) write-amplify(6.0) OK, records in: 7424, records dropped: 528 output_compression: NoCompression
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.437563) EVENT_LOG_v1 {"time_micros": 1763979162437548, "job": 46, "event": "compaction_finished", "compaction_time_micros": 109844, "compaction_time_cpu_micros": 63344, "output_level": 6, "num_output_files": 1, "total_output_size": 14849120, "num_input_records": 7424, "num_output_records": 6896, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162438645, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979162444936, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.325560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.445040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.445047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.445048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.445050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:12:42 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:12:42.445051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:12:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:43.608Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:43 compute-0 ceph-mon[74331]: pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:43.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:44.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:12:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:12:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:12:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:12:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:12:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:12:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:12:45
Nov 24 10:12:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:12:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:12:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.log']
Nov 24 10:12:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:12:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:12:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:12:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:12:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:12:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:45.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:45 compute-0 nova_compute[257700]: 2025-11-24 10:12:45.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:45 compute-0 ceph-mon[74331]: pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:12:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:46.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:12:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:46 compute-0 sshd-session[296758]: Received disconnect from 36.255.3.203 port 44665:11: Bye Bye [preauth]
Nov 24 10:12:46 compute-0 sshd-session[296758]: Disconnected from authenticating user root 36.255.3.203 port 44665 [preauth]
Nov 24 10:12:46 compute-0 nova_compute[257700]: 2025-11-24 10:12:46.724 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:46 compute-0 nova_compute[257700]: 2025-11-24 10:12:46.750 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:46 compute-0 ceph-mon[74331]: pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:47.630Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:12:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:47.631Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:12:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:47.631Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:47.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:48.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:48.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:12:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:48.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:12:49 compute-0 ceph-mon[74331]: pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:49.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:49 compute-0 sshd-session[296764]: Invalid user admin from 83.229.122.23 port 43218
Nov 24 10:12:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:12:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:12:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:12:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:12:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:50.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:50 compute-0 sshd-session[296764]: Received disconnect from 83.229.122.23 port 43218:11: Bye Bye [preauth]
Nov 24 10:12:50 compute-0 sshd-session[296764]: Disconnected from invalid user admin 83.229.122.23 port 43218 [preauth]
Nov 24 10:12:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:12:50] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:12:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:12:50] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:12:51 compute-0 sudo[296768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:12:51 compute-0 ceph-mon[74331]: pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:51 compute-0 sudo[296768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:51 compute-0 sudo[296768]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:51.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:51 compute-0 nova_compute[257700]: 2025-11-24 10:12:51.725 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:51 compute-0 nova_compute[257700]: 2025-11-24 10:12:51.752 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:51 compute-0 sshd-session[296753]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:12:51 compute-0 sshd-session[296753]: banner exchange: Connection from 121.31.210.125 port 51126: Connection timed out
Nov 24 10:12:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:52.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:53 compute-0 ceph-mon[74331]: pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:12:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:53.609Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:53 compute-0 sudo[296796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:12:53 compute-0 sudo[296796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:53 compute-0 sudo[296796]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:53.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:12:53 compute-0 sudo[296821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:12:53 compute-0 sudo[296821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:54.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:54 compute-0 sudo[296821]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:12:54 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:12:54 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:12:54 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:12:54 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:12:54 compute-0 sudo[296880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:12:54 compute-0 sudo[296880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:54 compute-0 sudo[296880]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:54 compute-0 sudo[296905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:12:54 compute-0 sudo[296905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:54 compute-0 podman[296972]: 2025-11-24 10:12:54.984284778 +0000 UTC m=+0.041249948 container create 1e4531c6acbe4d714a3424d3c33d800a60112da0ecce4fd77736f35e73cde460 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilbur, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 24 10:12:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:12:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:12:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:12:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:12:55 compute-0 systemd[1]: Started libpod-conmon-1e4531c6acbe4d714a3424d3c33d800a60112da0ecce4fd77736f35e73cde460.scope.
Nov 24 10:12:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:12:55 compute-0 podman[296972]: 2025-11-24 10:12:54.96664927 +0000 UTC m=+0.023614470 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:12:55 compute-0 podman[296972]: 2025-11-24 10:12:55.065350994 +0000 UTC m=+0.122316194 container init 1e4531c6acbe4d714a3424d3c33d800a60112da0ecce4fd77736f35e73cde460 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:12:55 compute-0 podman[296972]: 2025-11-24 10:12:55.071966801 +0000 UTC m=+0.128931971 container start 1e4531c6acbe4d714a3424d3c33d800a60112da0ecce4fd77736f35e73cde460 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilbur, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 10:12:55 compute-0 podman[296972]: 2025-11-24 10:12:55.07467286 +0000 UTC m=+0.131638040 container attach 1e4531c6acbe4d714a3424d3c33d800a60112da0ecce4fd77736f35e73cde460 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilbur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 10:12:55 compute-0 competent_wilbur[296989]: 167 167
Nov 24 10:12:55 compute-0 systemd[1]: libpod-1e4531c6acbe4d714a3424d3c33d800a60112da0ecce4fd77736f35e73cde460.scope: Deactivated successfully.
Nov 24 10:12:55 compute-0 podman[296972]: 2025-11-24 10:12:55.079343329 +0000 UTC m=+0.136308539 container died 1e4531c6acbe4d714a3424d3c33d800a60112da0ecce4fd77736f35e73cde460 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilbur, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 10:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-89b9ee26c9739dacd2b67ed7a5f9de02a55bf7ddc5771dd74b8700dcb9964134-merged.mount: Deactivated successfully.
Nov 24 10:12:55 compute-0 podman[296972]: 2025-11-24 10:12:55.118344458 +0000 UTC m=+0.175309638 container remove 1e4531c6acbe4d714a3424d3c33d800a60112da0ecce4fd77736f35e73cde460 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 24 10:12:55 compute-0 systemd[1]: libpod-conmon-1e4531c6acbe4d714a3424d3c33d800a60112da0ecce4fd77736f35e73cde460.scope: Deactivated successfully.
Nov 24 10:12:55 compute-0 ceph-mon[74331]: pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:12:55 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:12:55 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:12:55 compute-0 ceph-mon[74331]: pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:12:55 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:12:55 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:12:55 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:12:55 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:12:55 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:12:55 compute-0 podman[297014]: 2025-11-24 10:12:55.337691772 +0000 UTC m=+0.066192790 container create be20e0eff06c1f1531943b0c97712b02ae466d49479e265f2edfbf8f3364416d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_curran, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:12:55 compute-0 systemd[1]: Started libpod-conmon-be20e0eff06c1f1531943b0c97712b02ae466d49479e265f2edfbf8f3364416d.scope.
Nov 24 10:12:55 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6560c31e254ad55c032043f73f9538ff1c740abdb694ed469b8da92d9c0278d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6560c31e254ad55c032043f73f9538ff1c740abdb694ed469b8da92d9c0278d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:55 compute-0 podman[297014]: 2025-11-24 10:12:55.316216447 +0000 UTC m=+0.044717465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6560c31e254ad55c032043f73f9538ff1c740abdb694ed469b8da92d9c0278d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6560c31e254ad55c032043f73f9538ff1c740abdb694ed469b8da92d9c0278d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6560c31e254ad55c032043f73f9538ff1c740abdb694ed469b8da92d9c0278d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:55 compute-0 podman[297014]: 2025-11-24 10:12:55.419177538 +0000 UTC m=+0.147678566 container init be20e0eff06c1f1531943b0c97712b02ae466d49479e265f2edfbf8f3364416d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_curran, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 24 10:12:55 compute-0 podman[297014]: 2025-11-24 10:12:55.432255071 +0000 UTC m=+0.160756069 container start be20e0eff06c1f1531943b0c97712b02ae466d49479e265f2edfbf8f3364416d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 24 10:12:55 compute-0 podman[297014]: 2025-11-24 10:12:55.43539534 +0000 UTC m=+0.163896338 container attach be20e0eff06c1f1531943b0c97712b02ae466d49479e265f2edfbf8f3364416d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:12:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:55.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:12:55 compute-0 musing_curran[297030]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:12:55 compute-0 musing_curran[297030]: --> All data devices are unavailable
Nov 24 10:12:55 compute-0 systemd[1]: libpod-be20e0eff06c1f1531943b0c97712b02ae466d49479e265f2edfbf8f3364416d.scope: Deactivated successfully.
Nov 24 10:12:55 compute-0 podman[297014]: 2025-11-24 10:12:55.788402575 +0000 UTC m=+0.516903573 container died be20e0eff06c1f1531943b0c97712b02ae466d49479e265f2edfbf8f3364416d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_curran, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6560c31e254ad55c032043f73f9538ff1c740abdb694ed469b8da92d9c0278d-merged.mount: Deactivated successfully.
Nov 24 10:12:55 compute-0 podman[297014]: 2025-11-24 10:12:55.834200937 +0000 UTC m=+0.562701955 container remove be20e0eff06c1f1531943b0c97712b02ae466d49479e265f2edfbf8f3364416d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_curran, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:12:55 compute-0 systemd[1]: libpod-conmon-be20e0eff06c1f1531943b0c97712b02ae466d49479e265f2edfbf8f3364416d.scope: Deactivated successfully.
Nov 24 10:12:55 compute-0 sudo[296905]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:55 compute-0 sudo[297057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:12:55 compute-0 sudo[297057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:55 compute-0 sudo[297057]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:55 compute-0 nova_compute[257700]: 2025-11-24 10:12:55.942 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:12:55 compute-0 nova_compute[257700]: 2025-11-24 10:12:55.943 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 24 10:12:55 compute-0 sudo[297082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:12:55 compute-0 sudo[297082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:56.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:56 compute-0 podman[297149]: 2025-11-24 10:12:56.403495737 +0000 UTC m=+0.038831836 container create 709aad21794c6649dfad2c5b66ae2a3482c0e51fd0c2d61de0ca53412fe0dc70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_solomon, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Nov 24 10:12:56 compute-0 systemd[1]: Started libpod-conmon-709aad21794c6649dfad2c5b66ae2a3482c0e51fd0c2d61de0ca53412fe0dc70.scope.
Nov 24 10:12:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:12:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:12:56 compute-0 podman[297149]: 2025-11-24 10:12:56.38783877 +0000 UTC m=+0.023174879 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:12:56 compute-0 podman[297149]: 2025-11-24 10:12:56.488963385 +0000 UTC m=+0.124299524 container init 709aad21794c6649dfad2c5b66ae2a3482c0e51fd0c2d61de0ca53412fe0dc70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 24 10:12:56 compute-0 podman[297149]: 2025-11-24 10:12:56.499626776 +0000 UTC m=+0.134962915 container start 709aad21794c6649dfad2c5b66ae2a3482c0e51fd0c2d61de0ca53412fe0dc70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_solomon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 10:12:56 compute-0 podman[297149]: 2025-11-24 10:12:56.503041643 +0000 UTC m=+0.138377742 container attach 709aad21794c6649dfad2c5b66ae2a3482c0e51fd0c2d61de0ca53412fe0dc70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 10:12:56 compute-0 upbeat_solomon[297165]: 167 167
Nov 24 10:12:56 compute-0 systemd[1]: libpod-709aad21794c6649dfad2c5b66ae2a3482c0e51fd0c2d61de0ca53412fe0dc70.scope: Deactivated successfully.
Nov 24 10:12:56 compute-0 podman[297149]: 2025-11-24 10:12:56.50572199 +0000 UTC m=+0.141058189 container died 709aad21794c6649dfad2c5b66ae2a3482c0e51fd0c2d61de0ca53412fe0dc70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:12:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-44bdffda7fbd1bfc11b94e557795546379010c548e24e5088764f889b54c310b-merged.mount: Deactivated successfully.
Nov 24 10:12:56 compute-0 podman[297149]: 2025-11-24 10:12:56.544585406 +0000 UTC m=+0.179921505 container remove 709aad21794c6649dfad2c5b66ae2a3482c0e51fd0c2d61de0ca53412fe0dc70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:12:56 compute-0 systemd[1]: libpod-conmon-709aad21794c6649dfad2c5b66ae2a3482c0e51fd0c2d61de0ca53412fe0dc70.scope: Deactivated successfully.
Nov 24 10:12:56 compute-0 nova_compute[257700]: 2025-11-24 10:12:56.728 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:56 compute-0 nova_compute[257700]: 2025-11-24 10:12:56.753 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:12:56 compute-0 podman[297188]: 2025-11-24 10:12:56.779073764 +0000 UTC m=+0.073350892 container create c3ea79a039087dce149d158080492793480ad1c5660b0cf8b10ae368fbfa9a82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_burnell, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 10:12:56 compute-0 systemd[1]: Started libpod-conmon-c3ea79a039087dce149d158080492793480ad1c5660b0cf8b10ae368fbfa9a82.scope.
Nov 24 10:12:56 compute-0 podman[297188]: 2025-11-24 10:12:56.748892288 +0000 UTC m=+0.043169516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:12:56 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dcfba65b30fa4b431fda6b4f7e4d3a797cc655e47756e53fb2818600fbc899d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dcfba65b30fa4b431fda6b4f7e4d3a797cc655e47756e53fb2818600fbc899d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dcfba65b30fa4b431fda6b4f7e4d3a797cc655e47756e53fb2818600fbc899d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dcfba65b30fa4b431fda6b4f7e4d3a797cc655e47756e53fb2818600fbc899d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:56 compute-0 podman[297188]: 2025-11-24 10:12:56.870615436 +0000 UTC m=+0.164892654 container init c3ea79a039087dce149d158080492793480ad1c5660b0cf8b10ae368fbfa9a82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:12:56 compute-0 podman[297188]: 2025-11-24 10:12:56.876816123 +0000 UTC m=+0.171093251 container start c3ea79a039087dce149d158080492793480ad1c5660b0cf8b10ae368fbfa9a82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 10:12:56 compute-0 podman[297188]: 2025-11-24 10:12:56.880004774 +0000 UTC m=+0.174281983 container attach c3ea79a039087dce149d158080492793480ad1c5660b0cf8b10ae368fbfa9a82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_burnell, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:12:57 compute-0 cranky_burnell[297204]: {
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:     "0": [
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:         {
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             "devices": [
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "/dev/loop3"
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             ],
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             "lv_name": "ceph_lv0",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             "lv_size": "21470642176",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             "name": "ceph_lv0",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             "tags": {
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.cluster_name": "ceph",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.crush_device_class": "",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.encrypted": "0",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.osd_id": "0",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.type": "block",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.vdo": "0",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:                 "ceph.with_tpm": "0"
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             },
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             "type": "block",
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:             "vg_name": "ceph_vg0"
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:         }
Nov 24 10:12:57 compute-0 cranky_burnell[297204]:     ]
Nov 24 10:12:57 compute-0 cranky_burnell[297204]: }
Nov 24 10:12:57 compute-0 systemd[1]: libpod-c3ea79a039087dce149d158080492793480ad1c5660b0cf8b10ae368fbfa9a82.scope: Deactivated successfully.
Nov 24 10:12:57 compute-0 podman[297188]: 2025-11-24 10:12:57.211707219 +0000 UTC m=+0.505984347 container died c3ea79a039087dce149d158080492793480ad1c5660b0cf8b10ae368fbfa9a82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 24 10:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dcfba65b30fa4b431fda6b4f7e4d3a797cc655e47756e53fb2818600fbc899d-merged.mount: Deactivated successfully.
Nov 24 10:12:57 compute-0 podman[297188]: 2025-11-24 10:12:57.255232903 +0000 UTC m=+0.549510031 container remove c3ea79a039087dce149d158080492793480ad1c5660b0cf8b10ae368fbfa9a82 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_burnell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:12:57 compute-0 systemd[1]: libpod-conmon-c3ea79a039087dce149d158080492793480ad1c5660b0cf8b10ae368fbfa9a82.scope: Deactivated successfully.
Nov 24 10:12:57 compute-0 sudo[297082]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:12:57 compute-0 sudo[297226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:12:57 compute-0 sudo[297226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:57 compute-0 sudo[297226]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:57 compute-0 sudo[297251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:12:57 compute-0 sudo[297251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:57 compute-0 ceph-mon[74331]: pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:12:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:57.632Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:57.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:57 compute-0 podman[297319]: 2025-11-24 10:12:57.798839141 +0000 UTC m=+0.038707353 container create 7b1468230720ad6ac83bcdbd31132b489717fb22174d56fc8d7f08cad5a8b89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_galileo, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 10:12:57 compute-0 systemd[1]: Started libpod-conmon-7b1468230720ad6ac83bcdbd31132b489717fb22174d56fc8d7f08cad5a8b89c.scope.
Nov 24 10:12:57 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:12:57 compute-0 podman[297319]: 2025-11-24 10:12:57.864781064 +0000 UTC m=+0.104649326 container init 7b1468230720ad6ac83bcdbd31132b489717fb22174d56fc8d7f08cad5a8b89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_galileo, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:12:57 compute-0 podman[297319]: 2025-11-24 10:12:57.871388942 +0000 UTC m=+0.111257164 container start 7b1468230720ad6ac83bcdbd31132b489717fb22174d56fc8d7f08cad5a8b89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:12:57 compute-0 podman[297319]: 2025-11-24 10:12:57.875476436 +0000 UTC m=+0.115344648 container attach 7b1468230720ad6ac83bcdbd31132b489717fb22174d56fc8d7f08cad5a8b89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 24 10:12:57 compute-0 systemd[1]: libpod-7b1468230720ad6ac83bcdbd31132b489717fb22174d56fc8d7f08cad5a8b89c.scope: Deactivated successfully.
Nov 24 10:12:57 compute-0 sweet_galileo[297335]: 167 167
Nov 24 10:12:57 compute-0 conmon[297335]: conmon 7b1468230720ad6ac83b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b1468230720ad6ac83bcdbd31132b489717fb22174d56fc8d7f08cad5a8b89c.scope/container/memory.events
Nov 24 10:12:57 compute-0 podman[297319]: 2025-11-24 10:12:57.876663916 +0000 UTC m=+0.116532128 container died 7b1468230720ad6ac83bcdbd31132b489717fb22174d56fc8d7f08cad5a8b89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_galileo, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 10:12:57 compute-0 podman[297319]: 2025-11-24 10:12:57.783141373 +0000 UTC m=+0.023009615 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b453b1fc82b5bc50f69132a863969291935e1c9b8721a4411f5469de4919f48-merged.mount: Deactivated successfully.
Nov 24 10:12:57 compute-0 podman[297319]: 2025-11-24 10:12:57.905526147 +0000 UTC m=+0.145394359 container remove 7b1468230720ad6ac83bcdbd31132b489717fb22174d56fc8d7f08cad5a8b89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_galileo, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 24 10:12:57 compute-0 systemd[1]: libpod-conmon-7b1468230720ad6ac83bcdbd31132b489717fb22174d56fc8d7f08cad5a8b89c.scope: Deactivated successfully.
Nov 24 10:12:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:12:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:12:58.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:12:58 compute-0 podman[297359]: 2025-11-24 10:12:58.056063906 +0000 UTC m=+0.043815283 container create 61c5a08bc0b040b092ff446345036c5865ed92ab04cb07474cf55fc16cf9b8a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_thompson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:12:58 compute-0 systemd[1]: Started libpod-conmon-61c5a08bc0b040b092ff446345036c5865ed92ab04cb07474cf55fc16cf9b8a6.scope.
Nov 24 10:12:58 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f903e33c330f940e76dda70e2057c279306590596814000f6329c6e60a4e282/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f903e33c330f940e76dda70e2057c279306590596814000f6329c6e60a4e282/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f903e33c330f940e76dda70e2057c279306590596814000f6329c6e60a4e282/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f903e33c330f940e76dda70e2057c279306590596814000f6329c6e60a4e282/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:12:58 compute-0 podman[297359]: 2025-11-24 10:12:58.036427148 +0000 UTC m=+0.024178575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:12:58 compute-0 podman[297359]: 2025-11-24 10:12:58.140313793 +0000 UTC m=+0.128065210 container init 61c5a08bc0b040b092ff446345036c5865ed92ab04cb07474cf55fc16cf9b8a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_thompson, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 24 10:12:58 compute-0 podman[297359]: 2025-11-24 10:12:58.146720685 +0000 UTC m=+0.134472072 container start 61c5a08bc0b040b092ff446345036c5865ed92ab04cb07474cf55fc16cf9b8a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 24 10:12:58 compute-0 podman[297359]: 2025-11-24 10:12:58.14962257 +0000 UTC m=+0.137373977 container attach 61c5a08bc0b040b092ff446345036c5865ed92ab04cb07474cf55fc16cf9b8a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Nov 24 10:12:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:12:58 compute-0 lvm[297451]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:12:58 compute-0 lvm[297451]: VG ceph_vg0 finished
Nov 24 10:12:58 compute-0 elated_thompson[297376]: {}
Nov 24 10:12:58 compute-0 systemd[1]: libpod-61c5a08bc0b040b092ff446345036c5865ed92ab04cb07474cf55fc16cf9b8a6.scope: Deactivated successfully.
Nov 24 10:12:58 compute-0 systemd[1]: libpod-61c5a08bc0b040b092ff446345036c5865ed92ab04cb07474cf55fc16cf9b8a6.scope: Consumed 1.036s CPU time.
Nov 24 10:12:58 compute-0 podman[297359]: 2025-11-24 10:12:58.831712932 +0000 UTC m=+0.819464309 container died 61c5a08bc0b040b092ff446345036c5865ed92ab04cb07474cf55fc16cf9b8a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_thompson, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Nov 24 10:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f903e33c330f940e76dda70e2057c279306590596814000f6329c6e60a4e282-merged.mount: Deactivated successfully.
Nov 24 10:12:58 compute-0 podman[297359]: 2025-11-24 10:12:58.870635199 +0000 UTC m=+0.858386566 container remove 61c5a08bc0b040b092ff446345036c5865ed92ab04cb07474cf55fc16cf9b8a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:12:58 compute-0 systemd[1]: libpod-conmon-61c5a08bc0b040b092ff446345036c5865ed92ab04cb07474cf55fc16cf9b8a6.scope: Deactivated successfully.
Nov 24 10:12:58 compute-0 sudo[297251]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:12:58 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:12:58 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:12:58 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:12:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:12:58.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:12:58 compute-0 sudo[297467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:12:58 compute-0 sudo[297467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:12:58 compute-0 sudo[297467]: pam_unix(sudo:session): session closed for user root
Nov 24 10:12:59 compute-0 ceph-mon[74331]: pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:12:59 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:12:59 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:12:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:12:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:12:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:12:59.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:13:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:13:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:12:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:13:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:13:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:00.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:00 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:13:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:13:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:13:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:13:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:13:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:13:01 compute-0 anacron[29927]: Job `cron.monthly' started
Nov 24 10:13:01 compute-0 anacron[29927]: Job `cron.monthly' terminated
Nov 24 10:13:01 compute-0 anacron[29927]: Normal exit (3 jobs run)
Nov 24 10:13:01 compute-0 ceph-mon[74331]: pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:13:01 compute-0 nova_compute[257700]: 2025-11-24 10:13:01.729 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:01.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:01 compute-0 nova_compute[257700]: 2025-11-24 10:13:01.756 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:01 compute-0 podman[297497]: 2025-11-24 10:13:01.792802652 +0000 UTC m=+0.065254727 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 10:13:01 compute-0 podman[297498]: 2025-11-24 10:13:01.826761403 +0000 UTC m=+0.098360955 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 10:13:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:02.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:02 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:13:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/3040130363' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:13:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/3040130363' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:13:03 compute-0 ceph-mon[74331]: pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:13:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:03.610Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:03.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:04.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:04 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:13:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:13:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:13:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:13:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:13:05 compute-0 ceph-mon[74331]: pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:13:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:05.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:06.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:06 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:06 compute-0 nova_compute[257700]: 2025-11-24 10:13:06.733 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:06 compute-0 nova_compute[257700]: 2025-11-24 10:13:06.758 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:07 compute-0 ceph-mon[74331]: pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:07.633Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:07.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:08.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:08 compute-0 podman[297551]: 2025-11-24 10:13:08.072841681 +0000 UTC m=+0.086076805 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 24 10:13:08 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:08.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:13:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:08.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:13:09 compute-0 ceph-mon[74331]: pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:09.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:13:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:13:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:13:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:13:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:10.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:10 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1397: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:13:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:13:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:13:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:13:11 compute-0 sudo[297573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:13:11 compute-0 sudo[297573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:13:11 compute-0 sudo[297573]: pam_unix(sudo:session): session closed for user root
Nov 24 10:13:11 compute-0 ceph-mon[74331]: pgmap v1397: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:11 compute-0 nova_compute[257700]: 2025-11-24 10:13:11.735 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:11.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:11 compute-0 nova_compute[257700]: 2025-11-24 10:13:11.759 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:12.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:12 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1398: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:13.611Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:13:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:13.611Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:13 compute-0 ceph-mon[74331]: pgmap v1398: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:13.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:14.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:14 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1399: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:13:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:13:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:13:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:13:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:13:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:13:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:13:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:13:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:13:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:13:15 compute-0 ceph-mon[74331]: pgmap v1399: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:13:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:15.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:16.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:16 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1400: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:16 compute-0 nova_compute[257700]: 2025-11-24 10:13:16.738 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:16 compute-0 nova_compute[257700]: 2025-11-24 10:13:16.761 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:17.634Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:17 compute-0 ceph-mon[74331]: pgmap v1400: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:17.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:18.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:18 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1401: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:18.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:19 compute-0 ceph-mon[74331]: pgmap v1401: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:19.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:13:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:13:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:13:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:13:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:20.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:20 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1402: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:13:20.589 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:13:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:13:20.589 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:13:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:13:20.589 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:13:20 compute-0 ceph-mon[74331]: pgmap v1402: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:13:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:13:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:13:20] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:13:21 compute-0 nova_compute[257700]: 2025-11-24 10:13:21.741 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:21 compute-0 nova_compute[257700]: 2025-11-24 10:13:21.763 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:21.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:22.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:22 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1403: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:23 compute-0 ceph-mon[74331]: pgmap v1403: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:23.613Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:13:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:23.613Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:13:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:23.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:24.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:24 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1404: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:13:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:13:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:13:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:13:25 compute-0 ceph-mon[74331]: pgmap v1404: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:25.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:26.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:26 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1405: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:26 compute-0 nova_compute[257700]: 2025-11-24 10:13:26.743 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:26 compute-0 nova_compute[257700]: 2025-11-24 10:13:26.764 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:27 compute-0 ceph-mon[74331]: pgmap v1405: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:27.635Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:27.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:28.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:28 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1406: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:28 compute-0 nova_compute[257700]: 2025-11-24 10:13:28.930 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:28 compute-0 nova_compute[257700]: 2025-11-24 10:13:28.931 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:13:28 compute-0 nova_compute[257700]: 2025-11-24 10:13:28.931 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:13:28 compute-0 nova_compute[257700]: 2025-11-24 10:13:28.943 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:13:28 compute-0 nova_compute[257700]: 2025-11-24 10:13:28.944 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:28.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:29 compute-0 ceph-mon[74331]: pgmap v1406: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:29.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:29 compute-0 nova_compute[257700]: 2025-11-24 10:13:29.930 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:13:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:13:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:13:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:13:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:30.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:30 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1407: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:13:30 compute-0 nova_compute[257700]: 2025-11-24 10:13:30.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:30 compute-0 nova_compute[257700]: 2025-11-24 10:13:30.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:30 compute-0 nova_compute[257700]: 2025-11-24 10:13:30.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:13:30 compute-0 nova_compute[257700]: 2025-11-24 10:13:30.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:30 compute-0 nova_compute[257700]: 2025-11-24 10:13:30.979 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:13:30 compute-0 nova_compute[257700]: 2025-11-24 10:13:30.980 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:13:30 compute-0 nova_compute[257700]: 2025-11-24 10:13:30.980 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:13:30 compute-0 nova_compute[257700]: 2025-11-24 10:13:30.980 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:13:30 compute-0 nova_compute[257700]: 2025-11-24 10:13:30.980 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:13:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:13:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:13:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:13:30] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:13:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:13:31 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/783884729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:31 compute-0 nova_compute[257700]: 2025-11-24 10:13:31.397 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:13:31 compute-0 sudo[297640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:13:31 compute-0 sudo[297640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:13:31 compute-0 sudo[297640]: pam_unix(sudo:session): session closed for user root
Nov 24 10:13:31 compute-0 nova_compute[257700]: 2025-11-24 10:13:31.541 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:13:31 compute-0 nova_compute[257700]: 2025-11-24 10:13:31.542 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4501MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:13:31 compute-0 nova_compute[257700]: 2025-11-24 10:13:31.542 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:13:31 compute-0 nova_compute[257700]: 2025-11-24 10:13:31.543 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:13:31 compute-0 ceph-mon[74331]: pgmap v1407: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/783884729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:31 compute-0 nova_compute[257700]: 2025-11-24 10:13:31.705 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:13:31 compute-0 nova_compute[257700]: 2025-11-24 10:13:31.706 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:13:31 compute-0 nova_compute[257700]: 2025-11-24 10:13:31.744 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:31 compute-0 nova_compute[257700]: 2025-11-24 10:13:31.766 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:31.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:31 compute-0 nova_compute[257700]: 2025-11-24 10:13:31.790 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:13:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:32.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:13:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283119851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:32 compute-0 nova_compute[257700]: 2025-11-24 10:13:32.222 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:13:32 compute-0 nova_compute[257700]: 2025-11-24 10:13:32.228 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:13:32 compute-0 nova_compute[257700]: 2025-11-24 10:13:32.241 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:13:32 compute-0 nova_compute[257700]: 2025-11-24 10:13:32.242 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:13:32 compute-0 nova_compute[257700]: 2025-11-24 10:13:32.242 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:13:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:32 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1408: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:32 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4283119851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:32 compute-0 podman[297690]: 2025-11-24 10:13:32.785931722 +0000 UTC m=+0.065557814 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 10:13:32 compute-0 podman[297691]: 2025-11-24 10:13:32.845976775 +0000 UTC m=+0.123509243 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 10:13:33 compute-0 nova_compute[257700]: 2025-11-24 10:13:33.242 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:33 compute-0 nova_compute[257700]: 2025-11-24 10:13:33.243 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:33 compute-0 ceph-mon[74331]: pgmap v1408: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:33.614Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:33.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:34.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:34 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1409: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:34 compute-0 nova_compute[257700]: 2025-11-24 10:13:34.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:13:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:13:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:13:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:13:35 compute-0 ceph-mon[74331]: pgmap v1409: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:35.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:36.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:36 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1410: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1530069259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:36 compute-0 nova_compute[257700]: 2025-11-24 10:13:36.746 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:36 compute-0 nova_compute[257700]: 2025-11-24 10:13:36.768 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:36 compute-0 nova_compute[257700]: 2025-11-24 10:13:36.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:13:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:37.635Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:37 compute-0 ceph-mon[74331]: pgmap v1410: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4261055826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:37.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:38.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:38 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1411: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3050926301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:38 compute-0 podman[297743]: 2025-11-24 10:13:38.780651734 +0000 UTC m=+0.060257810 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 24 10:13:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:38.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:39 compute-0 ceph-mon[74331]: pgmap v1411: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4194534558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:13:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:39.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:13:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:13:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:13:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:13:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:40.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:40 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1412: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:13:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:13:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:13:40] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:13:41 compute-0 ceph-mon[74331]: pgmap v1412: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:41 compute-0 nova_compute[257700]: 2025-11-24 10:13:41.750 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:41 compute-0 nova_compute[257700]: 2025-11-24 10:13:41.770 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:41.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:42.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:42 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1413: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:43.615Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:43 compute-0 ceph-mon[74331]: pgmap v1413: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:43.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:44.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:44 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1414: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:13:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:13:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:13:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:13:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:13:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:13:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:13:45
Nov 24 10:13:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:13:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:13:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['default.rgw.control', 'images', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', '.nfs']
Nov 24 10:13:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:13:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:13:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:13:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:13:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:13:45 compute-0 ceph-mon[74331]: pgmap v1414: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:13:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:45.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:13:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:46.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:13:46 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1415: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:46 compute-0 nova_compute[257700]: 2025-11-24 10:13:46.752 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:46 compute-0 nova_compute[257700]: 2025-11-24 10:13:46.772 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:47.637Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:47 compute-0 ceph-mon[74331]: pgmap v1415: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:47.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:48.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:48 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1416: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:48 compute-0 ceph-mon[74331]: pgmap v1416: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:48.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:49 compute-0 sshd-session[297771]: Invalid user afa from 101.47.161.217 port 58590
Nov 24 10:13:49 compute-0 sshd-session[297771]: Received disconnect from 101.47.161.217 port 58590:11: Bye Bye [preauth]
Nov 24 10:13:49 compute-0 sshd-session[297771]: Disconnected from invalid user afa 101.47.161.217 port 58590 [preauth]
Nov 24 10:13:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:49.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:13:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:13:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:13:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:13:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:50.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:50 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1417: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:13:50] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:13:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:13:50] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Nov 24 10:13:51 compute-0 ceph-mon[74331]: pgmap v1417: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:51 compute-0 sudo[297776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:13:51 compute-0 sudo[297776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:13:51 compute-0 sudo[297776]: pam_unix(sudo:session): session closed for user root
Nov 24 10:13:51 compute-0 nova_compute[257700]: 2025-11-24 10:13:51.755 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:51 compute-0 nova_compute[257700]: 2025-11-24 10:13:51.773 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:51.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:52.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:52 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1418: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:52 compute-0 sshd-session[297793]: Invalid user ftpuser from 36.255.3.203 port 56829
Nov 24 10:13:52 compute-0 sshd-session[297793]: Received disconnect from 36.255.3.203 port 56829:11: Bye Bye [preauth]
Nov 24 10:13:52 compute-0 sshd-session[297793]: Disconnected from invalid user ftpuser 36.255.3.203 port 56829 [preauth]
Nov 24 10:13:53 compute-0 ceph-mon[74331]: pgmap v1418: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:53.616Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:53.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:54.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:54 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1419: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:13:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:13:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:13:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:13:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:13:55 compute-0 ceph-mon[74331]: pgmap v1419: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:55.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:13:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:56.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:13:56 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1420: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:56 compute-0 sshd[191849]: Timeout before authentication for connection from 14.215.126.91 to 38.129.56.124, pid = 296478
Nov 24 10:13:56 compute-0 nova_compute[257700]: 2025-11-24 10:13:56.774 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:13:56 compute-0 nova_compute[257700]: 2025-11-24 10:13:56.776 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:13:56 compute-0 nova_compute[257700]: 2025-11-24 10:13:56.776 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:13:56 compute-0 nova_compute[257700]: 2025-11-24 10:13:56.776 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:13:56 compute-0 nova_compute[257700]: 2025-11-24 10:13:56.820 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:13:56 compute-0 nova_compute[257700]: 2025-11-24 10:13:56.820 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:13:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:13:57 compute-0 ceph-mon[74331]: pgmap v1420: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:13:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:57.637Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:13:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:57.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:13:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:13:58.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:13:58 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1421: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:58.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:13:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:13:58.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:13:59 compute-0 sudo[297811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:13:59 compute-0 sudo[297811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:13:59 compute-0 sudo[297811]: pam_unix(sudo:session): session closed for user root
Nov 24 10:13:59 compute-0 sudo[297836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:13:59 compute-0 sudo[297836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:13:59 compute-0 ceph-mon[74331]: pgmap v1421: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:13:59 compute-0 sudo[297836]: pam_unix(sudo:session): session closed for user root
Nov 24 10:13:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:13:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:13:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:13:59.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:13:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1422: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:13:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:13:59 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:13:59 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:13:59 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:14:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:14:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:14:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:14:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:14:00 compute-0 sudo[297894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:14:00 compute-0 sudo[297894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:00 compute-0 sudo[297894]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:00.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:00 compute-0 sudo[297919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:14:00 compute-0 sudo[297919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:00 compute-0 podman[297984]: 2025-11-24 10:14:00.488926447 +0000 UTC m=+0.045952256 container create d1db6585792d834eb4d52fe4e3268e4d48a1aec576911f6eddc88d31427261aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:14:00 compute-0 systemd[1]: Started libpod-conmon-d1db6585792d834eb4d52fe4e3268e4d48a1aec576911f6eddc88d31427261aa.scope.
Nov 24 10:14:00 compute-0 podman[297984]: 2025-11-24 10:14:00.466869532 +0000 UTC m=+0.023895441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:14:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:14:00 compute-0 podman[297984]: 2025-11-24 10:14:00.592209779 +0000 UTC m=+0.149235598 container init d1db6585792d834eb4d52fe4e3268e4d48a1aec576911f6eddc88d31427261aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_germain, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 24 10:14:00 compute-0 podman[297984]: 2025-11-24 10:14:00.604742569 +0000 UTC m=+0.161768378 container start d1db6585792d834eb4d52fe4e3268e4d48a1aec576911f6eddc88d31427261aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_germain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 24 10:14:00 compute-0 podman[297984]: 2025-11-24 10:14:00.607810984 +0000 UTC m=+0.164836863 container attach d1db6585792d834eb4d52fe4e3268e4d48a1aec576911f6eddc88d31427261aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_germain, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:14:00 compute-0 suspicious_germain[297999]: 167 167
Nov 24 10:14:00 compute-0 systemd[1]: libpod-d1db6585792d834eb4d52fe4e3268e4d48a1aec576911f6eddc88d31427261aa.scope: Deactivated successfully.
Nov 24 10:14:00 compute-0 podman[297984]: 2025-11-24 10:14:00.613604607 +0000 UTC m=+0.170630416 container died d1db6585792d834eb4d52fe4e3268e4d48a1aec576911f6eddc88d31427261aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_germain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:14:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:14:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:14:00 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:14:00 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:14:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:14:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:14:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:14:00 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a4e348edd96c826f633a08957e273852f0806deb09c3d006cc4374f6ae17788-merged.mount: Deactivated successfully.
Nov 24 10:14:00 compute-0 podman[297984]: 2025-11-24 10:14:00.6618612 +0000 UTC m=+0.218887019 container remove d1db6585792d834eb4d52fe4e3268e4d48a1aec576911f6eddc88d31427261aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_germain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:14:00 compute-0 systemd[1]: libpod-conmon-d1db6585792d834eb4d52fe4e3268e4d48a1aec576911f6eddc88d31427261aa.scope: Deactivated successfully.
Nov 24 10:14:00 compute-0 podman[298023]: 2025-11-24 10:14:00.821491764 +0000 UTC m=+0.045190558 container create 201271295a36dc47c8d899e49c85b6161bceecd4b99bb74cb11d8582985e443b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_poincare, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:14:00 compute-0 systemd[1]: Started libpod-conmon-201271295a36dc47c8d899e49c85b6161bceecd4b99bb74cb11d8582985e443b.scope.
Nov 24 10:14:00 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:14:00 compute-0 podman[298023]: 2025-11-24 10:14:00.803550811 +0000 UTC m=+0.027249565 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b623b3d6d33cc380f0c6a00838c9c8e1c81f960b235732ee777f61b0e9995/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b623b3d6d33cc380f0c6a00838c9c8e1c81f960b235732ee777f61b0e9995/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b623b3d6d33cc380f0c6a00838c9c8e1c81f960b235732ee777f61b0e9995/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b623b3d6d33cc380f0c6a00838c9c8e1c81f960b235732ee777f61b0e9995/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470b623b3d6d33cc380f0c6a00838c9c8e1c81f960b235732ee777f61b0e9995/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:00 compute-0 podman[298023]: 2025-11-24 10:14:00.925714349 +0000 UTC m=+0.149413203 container init 201271295a36dc47c8d899e49c85b6161bceecd4b99bb74cb11d8582985e443b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_poincare, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 24 10:14:00 compute-0 podman[298023]: 2025-11-24 10:14:00.935034829 +0000 UTC m=+0.158733573 container start 201271295a36dc47c8d899e49c85b6161bceecd4b99bb74cb11d8582985e443b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 10:14:00 compute-0 podman[298023]: 2025-11-24 10:14:00.938507984 +0000 UTC m=+0.162206788 container attach 201271295a36dc47c8d899e49c85b6161bceecd4b99bb74cb11d8582985e443b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_poincare, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 24 10:14:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:14:00] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:14:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:14:00] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:14:01 compute-0 upbeat_poincare[298039]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:14:01 compute-0 upbeat_poincare[298039]: --> All data devices are unavailable
Nov 24 10:14:01 compute-0 systemd[1]: libpod-201271295a36dc47c8d899e49c85b6161bceecd4b99bb74cb11d8582985e443b.scope: Deactivated successfully.
Nov 24 10:14:01 compute-0 podman[298023]: 2025-11-24 10:14:01.34144701 +0000 UTC m=+0.565145774 container died 201271295a36dc47c8d899e49c85b6161bceecd4b99bb74cb11d8582985e443b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 24 10:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-470b623b3d6d33cc380f0c6a00838c9c8e1c81f960b235732ee777f61b0e9995-merged.mount: Deactivated successfully.
Nov 24 10:14:01 compute-0 podman[298023]: 2025-11-24 10:14:01.391269621 +0000 UTC m=+0.614968405 container remove 201271295a36dc47c8d899e49c85b6161bceecd4b99bb74cb11d8582985e443b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_poincare, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 24 10:14:01 compute-0 systemd[1]: libpod-conmon-201271295a36dc47c8d899e49c85b6161bceecd4b99bb74cb11d8582985e443b.scope: Deactivated successfully.
Nov 24 10:14:01 compute-0 sudo[297919]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:01 compute-0 sudo[298065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:14:01 compute-0 sudo[298065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:01 compute-0 sudo[298065]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:01 compute-0 sudo[298090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:14:01 compute-0 sudo[298090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:01 compute-0 ceph-mon[74331]: pgmap v1422: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:14:01 compute-0 nova_compute[257700]: 2025-11-24 10:14:01.820 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:01.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1423: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 24 10:14:02 compute-0 podman[298159]: 2025-11-24 10:14:02.066070574 +0000 UTC m=+0.036979176 container create 88038c1f8439e8458c1c65ae59ad16cff03a42910e950ac95fd2dbf73811a791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_brown, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:14:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:02 compute-0 systemd[1]: Started libpod-conmon-88038c1f8439e8458c1c65ae59ad16cff03a42910e950ac95fd2dbf73811a791.scope.
Nov 24 10:14:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:14:02 compute-0 podman[298159]: 2025-11-24 10:14:02.050754735 +0000 UTC m=+0.021663347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:14:02 compute-0 podman[298159]: 2025-11-24 10:14:02.156043336 +0000 UTC m=+0.126951938 container init 88038c1f8439e8458c1c65ae59ad16cff03a42910e950ac95fd2dbf73811a791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:14:02 compute-0 podman[298159]: 2025-11-24 10:14:02.16589599 +0000 UTC m=+0.136804592 container start 88038c1f8439e8458c1c65ae59ad16cff03a42910e950ac95fd2dbf73811a791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:14:02 compute-0 podman[298159]: 2025-11-24 10:14:02.169414156 +0000 UTC m=+0.140322788 container attach 88038c1f8439e8458c1c65ae59ad16cff03a42910e950ac95fd2dbf73811a791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_brown, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:14:02 compute-0 unruffled_brown[298175]: 167 167
Nov 24 10:14:02 compute-0 systemd[1]: libpod-88038c1f8439e8458c1c65ae59ad16cff03a42910e950ac95fd2dbf73811a791.scope: Deactivated successfully.
Nov 24 10:14:02 compute-0 conmon[298175]: conmon 88038c1f8439e8458c1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88038c1f8439e8458c1c65ae59ad16cff03a42910e950ac95fd2dbf73811a791.scope/container/memory.events
Nov 24 10:14:02 compute-0 podman[298159]: 2025-11-24 10:14:02.175067996 +0000 UTC m=+0.145976628 container died 88038c1f8439e8458c1c65ae59ad16cff03a42910e950ac95fd2dbf73811a791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 10:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-14d0976450c176c9ed909d72e479d840ee35b58dfb7a7a4a5d292c65ed92fd9b-merged.mount: Deactivated successfully.
Nov 24 10:14:02 compute-0 podman[298159]: 2025-11-24 10:14:02.224067366 +0000 UTC m=+0.194975978 container remove 88038c1f8439e8458c1c65ae59ad16cff03a42910e950ac95fd2dbf73811a791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_brown, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:14:02 compute-0 systemd[1]: libpod-conmon-88038c1f8439e8458c1c65ae59ad16cff03a42910e950ac95fd2dbf73811a791.scope: Deactivated successfully.
Nov 24 10:14:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:02 compute-0 podman[298203]: 2025-11-24 10:14:02.416153843 +0000 UTC m=+0.044932762 container create ea67b0ed7b5ffdf169e499bf548e346bfc99494add1daf130d742ac9f2952e9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_banach, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 24 10:14:02 compute-0 systemd[1]: Started libpod-conmon-ea67b0ed7b5ffdf169e499bf548e346bfc99494add1daf130d742ac9f2952e9c.scope.
Nov 24 10:14:02 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d9d64d69d111ed851fb962d458666320ac650ec63d2bd650e0190f3a7527c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d9d64d69d111ed851fb962d458666320ac650ec63d2bd650e0190f3a7527c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d9d64d69d111ed851fb962d458666320ac650ec63d2bd650e0190f3a7527c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d9d64d69d111ed851fb962d458666320ac650ec63d2bd650e0190f3a7527c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:02 compute-0 podman[298203]: 2025-11-24 10:14:02.395138144 +0000 UTC m=+0.023917093 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:14:02 compute-0 podman[298203]: 2025-11-24 10:14:02.501088391 +0000 UTC m=+0.129867350 container init ea67b0ed7b5ffdf169e499bf548e346bfc99494add1daf130d742ac9f2952e9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_banach, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:14:02 compute-0 podman[298203]: 2025-11-24 10:14:02.511516419 +0000 UTC m=+0.140295318 container start ea67b0ed7b5ffdf169e499bf548e346bfc99494add1daf130d742ac9f2952e9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_banach, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Nov 24 10:14:02 compute-0 podman[298203]: 2025-11-24 10:14:02.515272361 +0000 UTC m=+0.144051280 container attach ea67b0ed7b5ffdf169e499bf548e346bfc99494add1daf130d742ac9f2952e9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:14:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1342479791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:14:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/1342479791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:14:02 compute-0 boring_banach[298219]: {
Nov 24 10:14:02 compute-0 boring_banach[298219]:     "0": [
Nov 24 10:14:02 compute-0 boring_banach[298219]:         {
Nov 24 10:14:02 compute-0 boring_banach[298219]:             "devices": [
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "/dev/loop3"
Nov 24 10:14:02 compute-0 boring_banach[298219]:             ],
Nov 24 10:14:02 compute-0 boring_banach[298219]:             "lv_name": "ceph_lv0",
Nov 24 10:14:02 compute-0 boring_banach[298219]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:14:02 compute-0 boring_banach[298219]:             "lv_size": "21470642176",
Nov 24 10:14:02 compute-0 boring_banach[298219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:14:02 compute-0 boring_banach[298219]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:14:02 compute-0 boring_banach[298219]:             "name": "ceph_lv0",
Nov 24 10:14:02 compute-0 boring_banach[298219]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:14:02 compute-0 boring_banach[298219]:             "tags": {
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.cluster_name": "ceph",
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.crush_device_class": "",
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.encrypted": "0",
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.osd_id": "0",
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.type": "block",
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.vdo": "0",
Nov 24 10:14:02 compute-0 boring_banach[298219]:                 "ceph.with_tpm": "0"
Nov 24 10:14:02 compute-0 boring_banach[298219]:             },
Nov 24 10:14:02 compute-0 boring_banach[298219]:             "type": "block",
Nov 24 10:14:02 compute-0 boring_banach[298219]:             "vg_name": "ceph_vg0"
Nov 24 10:14:02 compute-0 boring_banach[298219]:         }
Nov 24 10:14:02 compute-0 boring_banach[298219]:     ]
Nov 24 10:14:02 compute-0 boring_banach[298219]: }
Nov 24 10:14:02 compute-0 systemd[1]: libpod-ea67b0ed7b5ffdf169e499bf548e346bfc99494add1daf130d742ac9f2952e9c.scope: Deactivated successfully.
Nov 24 10:14:02 compute-0 podman[298203]: 2025-11-24 10:14:02.832019957 +0000 UTC m=+0.460798886 container died ea67b0ed7b5ffdf169e499bf548e346bfc99494add1daf130d742ac9f2952e9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True)
Nov 24 10:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-49d9d64d69d111ed851fb962d458666320ac650ec63d2bd650e0190f3a7527c5-merged.mount: Deactivated successfully.
Nov 24 10:14:02 compute-0 podman[298203]: 2025-11-24 10:14:02.88109438 +0000 UTC m=+0.509873319 container remove ea67b0ed7b5ffdf169e499bf548e346bfc99494add1daf130d742ac9f2952e9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_banach, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:14:02 compute-0 systemd[1]: libpod-conmon-ea67b0ed7b5ffdf169e499bf548e346bfc99494add1daf130d742ac9f2952e9c.scope: Deactivated successfully.
Nov 24 10:14:02 compute-0 sudo[298090]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:02 compute-0 podman[298229]: 2025-11-24 10:14:02.95763356 +0000 UTC m=+0.088996939 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 10:14:03 compute-0 sudo[298280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:14:03 compute-0 sudo[298280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:03 compute-0 podman[298237]: 2025-11-24 10:14:03.023118058 +0000 UTC m=+0.138776869 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
Nov 24 10:14:03 compute-0 sudo[298280]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:03 compute-0 sudo[298310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:14:03 compute-0 sudo[298310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:03 compute-0 podman[298378]: 2025-11-24 10:14:03.589141732 +0000 UTC m=+0.049238057 container create 0f49e1473059a57f46faadbdf8ba5432deaf8652d2c688adbfd37ef86d03bbea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mendel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 24 10:14:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:03.617Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:03 compute-0 systemd[1]: Started libpod-conmon-0f49e1473059a57f46faadbdf8ba5432deaf8652d2c688adbfd37ef86d03bbea.scope.
Nov 24 10:14:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:14:03 compute-0 podman[298378]: 2025-11-24 10:14:03.569958499 +0000 UTC m=+0.030054864 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:14:03 compute-0 podman[298378]: 2025-11-24 10:14:03.674910492 +0000 UTC m=+0.135006827 container init 0f49e1473059a57f46faadbdf8ba5432deaf8652d2c688adbfd37ef86d03bbea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:14:03 compute-0 podman[298378]: 2025-11-24 10:14:03.681694919 +0000 UTC m=+0.141791234 container start 0f49e1473059a57f46faadbdf8ba5432deaf8652d2c688adbfd37ef86d03bbea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:14:03 compute-0 podman[298378]: 2025-11-24 10:14:03.684735625 +0000 UTC m=+0.144831960 container attach 0f49e1473059a57f46faadbdf8ba5432deaf8652d2c688adbfd37ef86d03bbea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 10:14:03 compute-0 ceph-mon[74331]: pgmap v1423: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 24 10:14:03 compute-0 distracted_mendel[298396]: 167 167
Nov 24 10:14:03 compute-0 systemd[1]: libpod-0f49e1473059a57f46faadbdf8ba5432deaf8652d2c688adbfd37ef86d03bbea.scope: Deactivated successfully.
Nov 24 10:14:03 compute-0 sshd-session[298195]: Received disconnect from 83.229.122.23 port 37772:11: Bye Bye [preauth]
Nov 24 10:14:03 compute-0 conmon[298396]: conmon 0f49e1473059a57f46fa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f49e1473059a57f46faadbdf8ba5432deaf8652d2c688adbfd37ef86d03bbea.scope/container/memory.events
Nov 24 10:14:03 compute-0 podman[298378]: 2025-11-24 10:14:03.690127447 +0000 UTC m=+0.150223762 container died 0f49e1473059a57f46faadbdf8ba5432deaf8652d2c688adbfd37ef86d03bbea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 10:14:03 compute-0 sshd-session[298195]: Disconnected from authenticating user root 83.229.122.23 port 37772 [preauth]
Nov 24 10:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ce00a71cec0d5392ccc62d9ef408ee5d0ec61b50538b2c67aeb50c11b7e1c2a-merged.mount: Deactivated successfully.
Nov 24 10:14:03 compute-0 podman[298378]: 2025-11-24 10:14:03.730821703 +0000 UTC m=+0.190918018 container remove 0f49e1473059a57f46faadbdf8ba5432deaf8652d2c688adbfd37ef86d03bbea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Nov 24 10:14:03 compute-0 systemd[1]: libpod-conmon-0f49e1473059a57f46faadbdf8ba5432deaf8652d2c688adbfd37ef86d03bbea.scope: Deactivated successfully.
Nov 24 10:14:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:03.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:03 compute-0 podman[298420]: 2025-11-24 10:14:03.911197819 +0000 UTC m=+0.043010833 container create 21f75866cc3912f02e3ffb764316fe8b2ba0d9c04cd39d6336ac6357099d2751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_hopper, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 24 10:14:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1424: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:14:03 compute-0 systemd[1]: Started libpod-conmon-21f75866cc3912f02e3ffb764316fe8b2ba0d9c04cd39d6336ac6357099d2751.scope.
Nov 24 10:14:03 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4b50afd04ec26b0080e9fead66fd12e8157edc3aaa608bfd68a590c57cccda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4b50afd04ec26b0080e9fead66fd12e8157edc3aaa608bfd68a590c57cccda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4b50afd04ec26b0080e9fead66fd12e8157edc3aaa608bfd68a590c57cccda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:03 compute-0 podman[298420]: 2025-11-24 10:14:03.892664541 +0000 UTC m=+0.024477585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4b50afd04ec26b0080e9fead66fd12e8157edc3aaa608bfd68a590c57cccda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:14:04 compute-0 podman[298420]: 2025-11-24 10:14:04.016206653 +0000 UTC m=+0.148019667 container init 21f75866cc3912f02e3ffb764316fe8b2ba0d9c04cd39d6336ac6357099d2751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_hopper, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:14:04 compute-0 podman[298420]: 2025-11-24 10:14:04.026318374 +0000 UTC m=+0.158131378 container start 21f75866cc3912f02e3ffb764316fe8b2ba0d9c04cd39d6336ac6357099d2751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 10:14:04 compute-0 podman[298420]: 2025-11-24 10:14:04.030498497 +0000 UTC m=+0.162311511 container attach 21f75866cc3912f02e3ffb764316fe8b2ba0d9c04cd39d6336ac6357099d2751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_hopper, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 24 10:14:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:04.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:04 compute-0 lvm[298512]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:14:04 compute-0 lvm[298512]: VG ceph_vg0 finished
Nov 24 10:14:04 compute-0 naughty_hopper[298437]: {}
Nov 24 10:14:04 compute-0 systemd[1]: libpod-21f75866cc3912f02e3ffb764316fe8b2ba0d9c04cd39d6336ac6357099d2751.scope: Deactivated successfully.
Nov 24 10:14:04 compute-0 systemd[1]: libpod-21f75866cc3912f02e3ffb764316fe8b2ba0d9c04cd39d6336ac6357099d2751.scope: Consumed 1.149s CPU time.
Nov 24 10:14:04 compute-0 podman[298420]: 2025-11-24 10:14:04.75713694 +0000 UTC m=+0.888949934 container died 21f75866cc3912f02e3ffb764316fe8b2ba0d9c04cd39d6336ac6357099d2751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 10:14:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f4b50afd04ec26b0080e9fead66fd12e8157edc3aaa608bfd68a590c57cccda-merged.mount: Deactivated successfully.
Nov 24 10:14:04 compute-0 podman[298420]: 2025-11-24 10:14:04.795925897 +0000 UTC m=+0.927738891 container remove 21f75866cc3912f02e3ffb764316fe8b2ba0d9c04cd39d6336ac6357099d2751 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 24 10:14:04 compute-0 systemd[1]: libpod-conmon-21f75866cc3912f02e3ffb764316fe8b2ba0d9c04cd39d6336ac6357099d2751.scope: Deactivated successfully.
Nov 24 10:14:04 compute-0 sudo[298310]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:04 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:14:04 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.852623) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244852691, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 992, "num_deletes": 251, "total_data_size": 1615744, "memory_usage": 1649008, "flush_reason": "Manual Compaction"}
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 24 10:14:04 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244865682, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1591187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38407, "largest_seqno": 39398, "table_properties": {"data_size": 1586358, "index_size": 2353, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10689, "raw_average_key_size": 19, "raw_value_size": 1576631, "raw_average_value_size": 2925, "num_data_blocks": 103, "num_entries": 539, "num_filter_entries": 539, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763979162, "oldest_key_time": 1763979162, "file_creation_time": 1763979244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 13110 microseconds, and 7312 cpu microseconds.
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.865735) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1591187 bytes OK
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.865758) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.868537) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.868606) EVENT_LOG_v1 {"time_micros": 1763979244868598, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.868628) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1611141, prev total WAL file size 1647778, number of live WAL files 2.
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.869336) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1553KB)], [83(14MB)]
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244869367, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 16440307, "oldest_snapshot_seqno": -1}
Nov 24 10:14:04 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:14:04 compute-0 sudo[298530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:14:04 compute-0 sudo[298530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:04 compute-0 sudo[298530]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6919 keys, 14252137 bytes, temperature: kUnknown
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244959647, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 14252137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14207399, "index_size": 26322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 182472, "raw_average_key_size": 26, "raw_value_size": 14084307, "raw_average_value_size": 2035, "num_data_blocks": 1029, "num_entries": 6919, "num_filter_entries": 6919, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763976305, "oldest_key_time": 0, "file_creation_time": 1763979244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42aa12d2-c531-4ddc-8c4c-bc0b5971346b", "db_session_id": "RORHLERH15LC1QL8D0I4", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.959940) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 14252137 bytes
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.961320) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.9 rd, 157.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 14.2 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(19.3) write-amplify(9.0) OK, records in: 7435, records dropped: 516 output_compression: NoCompression
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.961356) EVENT_LOG_v1 {"time_micros": 1763979244961341, "job": 48, "event": "compaction_finished", "compaction_time_micros": 90365, "compaction_time_cpu_micros": 31487, "output_level": 6, "num_output_files": 1, "total_output_size": 14252137, "num_input_records": 7435, "num_output_records": 6919, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244961953, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763979244966357, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.869265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.966426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.966431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.966433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.966435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:14:04 compute-0 ceph-mon[74331]: rocksdb: (Original Log Time 2025/11/24-10:14:04.966436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 10:14:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:14:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:14:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:14:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:14:05 compute-0 ceph-mon[74331]: pgmap v1424: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:14:05 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:14:05 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:14:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:05.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1425: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:14:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:06.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:06 compute-0 nova_compute[257700]: 2025-11-24 10:14:06.822 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:14:06 compute-0 nova_compute[257700]: 2025-11-24 10:14:06.825 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:14:06 compute-0 nova_compute[257700]: 2025-11-24 10:14:06.825 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:14:06 compute-0 nova_compute[257700]: 2025-11-24 10:14:06.825 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:14:06 compute-0 nova_compute[257700]: 2025-11-24 10:14:06.826 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:06 compute-0 nova_compute[257700]: 2025-11-24 10:14:06.826 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:14:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:07.638Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:07 compute-0 ceph-mon[74331]: pgmap v1425: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:14:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:07.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1426: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 24 10:14:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:08.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:08.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:09 compute-0 ceph-mon[74331]: pgmap v1426: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Nov 24 10:14:09 compute-0 podman[298561]: 2025-11-24 10:14:09.827025717 +0000 UTC m=+0.096980938 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:14:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:09.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1427: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:14:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:14:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:14:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:14:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:14:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:10.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:14:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:14:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:14:10] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Nov 24 10:14:11 compute-0 sudo[298581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:14:11 compute-0 sudo[298581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:11 compute-0 sudo[298581]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:11 compute-0 ceph-mon[74331]: pgmap v1427: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:14:11 compute-0 nova_compute[257700]: 2025-11-24 10:14:11.827 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:14:11 compute-0 nova_compute[257700]: 2025-11-24 10:14:11.829 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:14:11 compute-0 nova_compute[257700]: 2025-11-24 10:14:11.829 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:14:11 compute-0 nova_compute[257700]: 2025-11-24 10:14:11.829 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:14:11 compute-0 nova_compute[257700]: 2025-11-24 10:14:11.830 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:11 compute-0 nova_compute[257700]: 2025-11-24 10:14:11.830 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:14:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:11.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1428: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:12.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:13.619Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:14:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:13.620Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:14:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:13.620Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:13 compute-0 ceph-mon[74331]: pgmap v1428: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:13.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1429: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:14.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:14 compute-0 ceph-mon[74331]: pgmap v1429: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:14:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:14:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:14:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:14:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:14:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:14:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:14:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:14:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:14:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:14:15 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:14:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:15.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1430: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:14:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:16.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:14:16 compute-0 ceph-mon[74331]: pgmap v1430: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:16 compute-0 nova_compute[257700]: 2025-11-24 10:14:16.831 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:14:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:17.639Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:17.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:17 compute-0 sshd-session[298557]: error: kex_exchange_identification: read: Connection timed out
Nov 24 10:14:17 compute-0 sshd-session[298557]: banner exchange: Connection from 120.52.12.202 port 59336: Connection timed out
Nov 24 10:14:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1431: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:18.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:18.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:19 compute-0 ceph-mon[74331]: pgmap v1431: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:19.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1432: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:14:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:14:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:14:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:14:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:20.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:14:20.590 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:14:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:14:20.591 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:14:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:14:20.591 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:14:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:14:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:14:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:14:20] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Nov 24 10:14:21 compute-0 ceph-mon[74331]: pgmap v1432: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:21 compute-0 nova_compute[257700]: 2025-11-24 10:14:21.833 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:21 compute-0 nova_compute[257700]: 2025-11-24 10:14:21.835 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:21.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1433: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:22.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:23 compute-0 ceph-mon[74331]: pgmap v1433: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:23.621Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:23.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1434: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:24.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:14:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:14:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:14:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:14:25 compute-0 ceph-mon[74331]: pgmap v1434: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:25.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1435: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:26.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:26 compute-0 nova_compute[257700]: 2025-11-24 10:14:26.835 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:27 compute-0 ceph-mon[74331]: pgmap v1435: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:27.640Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.002000047s ======
Nov 24 10:14:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:27.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Nov 24 10:14:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1436: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:28.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:28 compute-0 nova_compute[257700]: 2025-11-24 10:14:28.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:28 compute-0 nova_compute[257700]: 2025-11-24 10:14:28.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:14:28 compute-0 nova_compute[257700]: 2025-11-24 10:14:28.922 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:14:28 compute-0 nova_compute[257700]: 2025-11-24 10:14:28.937 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:14:28 compute-0 nova_compute[257700]: 2025-11-24 10:14:28.938 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:28.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:29 compute-0 ceph-mon[74331]: pgmap v1436: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:29.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1437: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:14:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:14:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:14:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:14:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:30.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:30 compute-0 nova_compute[257700]: 2025-11-24 10:14:30.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:30 compute-0 nova_compute[257700]: 2025-11-24 10:14:30.957 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:14:30 compute-0 nova_compute[257700]: 2025-11-24 10:14:30.957 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:14:30 compute-0 nova_compute[257700]: 2025-11-24 10:14:30.957 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:14:30 compute-0 nova_compute[257700]: 2025-11-24 10:14:30.957 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:14:30 compute-0 nova_compute[257700]: 2025-11-24 10:14:30.958 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:14:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:14:30] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:14:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:14:30] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:14:31 compute-0 ceph-mon[74331]: pgmap v1437: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:31 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:14:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:14:31 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/552573425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.412 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.600 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.602 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4492MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.602 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.602 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.669 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.670 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.795 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing inventories for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 24 10:14:31 compute-0 sudo[298648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:14:31 compute-0 sudo[298648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:31 compute-0 sudo[298648]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.828 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating ProviderTree inventory for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.829 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Updating inventory in ProviderTree for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.837 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.843 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing aggregate associations for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.878 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Refreshing trait associations for resource provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257, traits: COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,HW_CPU_X86_BMI,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 24 10:14:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:31.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:31 compute-0 nova_compute[257700]: 2025-11-24 10:14:31.907 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:14:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1438: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:32 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/552573425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:32.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:14:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1896160492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:32 compute-0 nova_compute[257700]: 2025-11-24 10:14:32.400 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:14:32 compute-0 nova_compute[257700]: 2025-11-24 10:14:32.406 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:14:32 compute-0 nova_compute[257700]: 2025-11-24 10:14:32.421 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:14:32 compute-0 nova_compute[257700]: 2025-11-24 10:14:32.422 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:14:32 compute-0 nova_compute[257700]: 2025-11-24 10:14:32.422 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:14:33 compute-0 ceph-mon[74331]: pgmap v1438: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1896160492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:33 compute-0 nova_compute[257700]: 2025-11-24 10:14:33.418 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:33 compute-0 nova_compute[257700]: 2025-11-24 10:14:33.419 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:33 compute-0 nova_compute[257700]: 2025-11-24 10:14:33.419 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:33 compute-0 nova_compute[257700]: 2025-11-24 10:14:33.420 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 24 10:14:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:33.622Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:33 compute-0 podman[298698]: 2025-11-24 10:14:33.828167785 +0000 UTC m=+0.089791859 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:14:33 compute-0 podman[298697]: 2025-11-24 10:14:33.830506263 +0000 UTC m=+0.101711424 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:14:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:33.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:33 compute-0 nova_compute[257700]: 2025-11-24 10:14:33.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1439: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:34.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:14:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:14:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:14:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:14:35 compute-0 ceph-mon[74331]: pgmap v1439: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:35.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1440: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:36.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:36 compute-0 nova_compute[257700]: 2025-11-24 10:14:36.839 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:36 compute-0 nova_compute[257700]: 2025-11-24 10:14:36.841 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:36 compute-0 nova_compute[257700]: 2025-11-24 10:14:36.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:37 compute-0 ceph-mon[74331]: pgmap v1440: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2311438900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:37.641Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:37.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:37 compute-0 nova_compute[257700]: 2025-11-24 10:14:37.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:14:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1441: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:38.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1121257676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:38 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:38.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:39 compute-0 ceph-mon[74331]: pgmap v1441: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:39.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1442: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:14:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:14:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:14:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:14:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:40.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:40 compute-0 podman[298750]: 2025-11-24 10:14:40.791509454 +0000 UTC m=+0.064907746 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 24 10:14:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:14:40] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:14:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:14:40] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:14:41 compute-0 ceph-mon[74331]: pgmap v1442: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:41 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1493371527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:14:41 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2401290326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:41 compute-0 nova_compute[257700]: 2025-11-24 10:14:41.842 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:14:41 compute-0 nova_compute[257700]: 2025-11-24 10:14:41.844 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:14:41 compute-0 nova_compute[257700]: 2025-11-24 10:14:41.844 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:14:41 compute-0 nova_compute[257700]: 2025-11-24 10:14:41.844 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:14:41 compute-0 nova_compute[257700]: 2025-11-24 10:14:41.890 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:41 compute-0 nova_compute[257700]: 2025-11-24 10:14:41.891 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:14:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:41.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1443: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:42.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:42 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2401290326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:14:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:43 compute-0 ceph-mon[74331]: pgmap v1443: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:43.623Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:43 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:43 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:43 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:43.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1444: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:14:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:44.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:14:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:14:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:14:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:14:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:14:45 compute-0 ceph-mon[74331]: pgmap v1444: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:14:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:14:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:14:45
Nov 24 10:14:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:14:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:14:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'vms', '.nfs', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups']
Nov 24 10:14:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:14:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:14:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:14:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:14:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:14:45 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:45 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:45 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:45.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1445: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:14:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:14:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:46.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:46 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:14:46 compute-0 nova_compute[257700]: 2025-11-24 10:14:46.891 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:14:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:47 compute-0 ceph-mon[74331]: pgmap v1445: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:47.642Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:47 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:47 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:47 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:47.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1446: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:48.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:48 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:48.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:49 compute-0 ceph-mon[74331]: pgmap v1446: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:49 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:49 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:49 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:49.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:49 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1447: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:14:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:14:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:49 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:14:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:50 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:14:50 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:50 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:14:50 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:50.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:14:50 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:14:50] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 24 10:14:50 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:14:50] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 24 10:14:51 compute-0 ceph-mon[74331]: pgmap v1447: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:51 compute-0 sudo[298780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:14:51 compute-0 sudo[298780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:14:51 compute-0 sudo[298780]: pam_unix(sudo:session): session closed for user root
Nov 24 10:14:51 compute-0 nova_compute[257700]: 2025-11-24 10:14:51.893 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:51 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:51 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:51 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:51.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:51 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1448: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:52 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:52 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:52 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:52.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:52 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:53 compute-0 ceph-mon[74331]: pgmap v1448: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:53.624Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:14:53 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:53.625Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:53 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:53 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:53 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:53.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:53 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1449: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:54 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:54 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:54 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:54.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:14:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:14:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:54 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:14:55 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:55 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:14:55 compute-0 ceph-mon[74331]: pgmap v1449: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:55 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:55 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:55 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:55.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:55 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1450: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:56 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:56 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:56 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:56.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:56 compute-0 nova_compute[257700]: 2025-11-24 10:14:56.895 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:14:56 compute-0 nova_compute[257700]: 2025-11-24 10:14:56.897 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:14:56 compute-0 nova_compute[257700]: 2025-11-24 10:14:56.897 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:14:56 compute-0 nova_compute[257700]: 2025-11-24 10:14:56.897 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:14:56 compute-0 nova_compute[257700]: 2025-11-24 10:14:56.929 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:14:56 compute-0 nova_compute[257700]: 2025-11-24 10:14:56.930 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:14:57 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:14:57 compute-0 ceph-mon[74331]: pgmap v1450: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:14:57 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:57.642Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:57 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:57 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:57 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:57.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:57 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1451: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:58 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:58 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:58 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:14:58.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:58 compute-0 sshd-session[298810]: Received disconnect from 36.255.3.203 port 40751:11: Bye Bye [preauth]
Nov 24 10:14:58 compute-0 sshd-session[298810]: Disconnected from authenticating user root 36.255.3.203 port 40751 [preauth]
Nov 24 10:14:58 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:14:58.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:14:59 compute-0 ceph-mon[74331]: pgmap v1451: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:14:59 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:14:59 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:14:59 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:14:59.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:14:59 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1452: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:15:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:15:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:14:59 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:15:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:00 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:15:00 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:00 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:00 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:00.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:00 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:15:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:15:00 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:15:00] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:15:01 compute-0 ceph-mon[74331]: pgmap v1452: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:01 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:15:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 24 10:15:01 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2187607288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:15:01 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 24 10:15:01 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2187607288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:15:01 compute-0 nova_compute[257700]: 2025-11-24 10:15:01.931 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:15:01 compute-0 nova_compute[257700]: 2025-11-24 10:15:01.932 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:15:01 compute-0 nova_compute[257700]: 2025-11-24 10:15:01.932 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:15:01 compute-0 nova_compute[257700]: 2025-11-24 10:15:01.933 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:15:01 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:01 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:01 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:01.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:01 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1453: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:01 compute-0 nova_compute[257700]: 2025-11-24 10:15:01.996 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:01 compute-0 nova_compute[257700]: 2025-11-24 10:15:01.996 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:15:02 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:02 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:02 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:02.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:02 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2187607288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 10:15:02 compute-0 ceph-mon[74331]: from='client.? 192.168.122.10:0/2187607288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 10:15:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:03.626Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:15:03 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:03.626Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:03 compute-0 ceph-mon[74331]: pgmap v1453: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:03 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:03 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:03 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:03.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:03 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1454: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:04 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:04 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:04 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:04.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:04 compute-0 podman[298820]: 2025-11-24 10:15:04.783564724 +0000 UTC m=+0.056209600 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 24 10:15:04 compute-0 podman[298821]: 2025-11-24 10:15:04.835951128 +0000 UTC m=+0.104165475 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
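[note] The podman health_status events above embed each container's config_data as a Python-style dict literal inside the label dump. A sketch that recovers it with a balanced-brace scan plus ast.literal_eval (this assumes no stray braces inside the quoted strings, which holds for the lines here):

    import ast

    def config_data(line: str) -> dict:
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unbalanced config_data literal")

    # e.g. config_data(journal_line)["volumes"] lists the multipathd bind mounts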
Nov 24 10:15:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:15:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:15:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:04 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:15:05 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:05 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
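[note] The ganesha.nfsd lines above are a grace-period cycle: the server enters a 90-second grace window, reloads client info from the RADOS backend, then checks whether grace can be lifted early. A toy model of that last check, assuming (as the "reclaim complete(0) clid count(0)" line suggests) that grace can end once every surviving client has finished reclaim; the real logic lives in nfs_try_lift_grace in nfs-ganesha:

    def can_lift_grace(reclaim_complete: int, clid_count: int) -> bool:
        # With no clients holding reclaimable state, there is nothing to wait for.
        return reclaim_complete >= clid_count

    print(can_lift_grace(0, 0))  # True -> grace may be lifted early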
Nov 24 10:15:05 compute-0 sudo[298864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:15:05 compute-0 sudo[298864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:05 compute-0 sudo[298864]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:05 compute-0 sudo[298889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Nov 24 10:15:05 compute-0 sudo[298889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
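[note] The sudo invocation above is cephadm's periodic host inventory: "cephadm ... ls" prints a JSON array describing every daemon deployed on this host. A sketch driving it the same way, with the fsid and script path copied from the logged command (image pin omitted; needs root, hence the sudo in the surrounding lines):

    import json
    import subprocess

    FSID = "84a084c3-61a7-5de7-8207-1f88efa59a64"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "ls"],
        check=True, capture_output=True, text=True,
    ).stdout
    daemons = json.loads(out)
    print(sorted(d["name"] for d in daemons))  # mon.compute-0, mgr..., osd..., ...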
Nov 24 10:15:05 compute-0 ceph-mon[74331]: pgmap v1454: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:05 compute-0 podman[298988]: 2025-11-24 10:15:05.915287645 +0000 UTC m=+0.077773523 container exec 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:15:05 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:05 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:05 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:05.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:05 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1455: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:06 compute-0 podman[298988]: 2025-11-24 10:15:06.004703563 +0000 UTC m=+0.167189421 container exec_died 926e81c0f890a1c1ac5ebf5b0a3fc7d39273a3029701ecf933d5ab782a4c6bc4 (image=quay.io/ceph/ceph:v19, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Nov 24 10:15:06 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:06 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:06 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:06.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:06 compute-0 podman[299125]: 2025-11-24 10:15:06.717672258 +0000 UTC m=+0.074202954 container exec c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:15:06 compute-0 podman[299125]: 2025-11-24 10:15:06.754746414 +0000 UTC m=+0.111277110 container exec_died c1042f9aaa96d1cc7323d0bb263b746783ae7f616fd1b71ffa56027caf075582 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:15:06 compute-0 nova_compute[257700]: 2025-11-24 10:15:06.997 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:15:07 compute-0 nova_compute[257700]: 2025-11-24 10:15:06.999 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:07 compute-0 nova_compute[257700]: 2025-11-24 10:15:07.000 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:15:07 compute-0 nova_compute[257700]: 2025-11-24 10:15:07.000 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:15:07 compute-0 nova_compute[257700]: 2025-11-24 10:15:07.001 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:15:07 compute-0 nova_compute[257700]: 2025-11-24 10:15:07.003 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
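[note] The ovsdbapp lines above are the OVSDB client's keepalive cycle: after roughly 5 seconds of silence on tcp:127.0.0.1:6640 it sends an inactivity probe (entering IDLE), and the reply promptly returns it to ACTIVE. A toy state machine capturing that behavior, not the real ovs.reconnect implementation:

    PROBE_INTERVAL_MS = 5000

    class Conn:
        def __init__(self):
            self.state, self.last_rx = "ACTIVE", 0

        def tick(self, now_ms):
            if now_ms - self.last_rx >= PROBE_INTERVAL_MS:
                if self.state == "ACTIVE":
                    self.state = "IDLE"          # probe sent, awaiting traffic
                elif self.state == "IDLE":
                    self.state = "DISCONNECTED"  # probe unanswered -> reconnect

        def received(self, now_ms):
            self.last_rx, self.state = now_ms, "ACTIVE"

    c = Conn()
    c.tick(5003)      # idle 5003 ms, as in the log -> probe, IDLE
    c.received(5004)  # reply on fd 25 -> back to ACTIVE
    print(c.state)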
Nov 24 10:15:07 compute-0 podman[299198]: 2025-11-24 10:15:07.075214052 +0000 UTC m=+0.078299036 container exec a8ff859c0ee484e58c6aaf58e6d722a3faffb91c2dea80441e79254f2043cb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:15:07 compute-0 podman[299198]: 2025-11-24 10:15:07.099793928 +0000 UTC m=+0.102878902 container exec_died a8ff859c0ee484e58c6aaf58e6d722a3faffb91c2dea80441e79254f2043cb44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Nov 24 10:15:07 compute-0 podman[299262]: 2025-11-24 10:15:07.340495925 +0000 UTC m=+0.063055559 container exec 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 10:15:07 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:07 compute-0 podman[299262]: 2025-11-24 10:15:07.355679521 +0000 UTC m=+0.078239095 container exec_died 6c3a81d73f056383702bf60c1dab3f213ae48261b4107ee30655cbadd5ed4114 (image=quay.io/ceph/haproxy:2.3, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-haproxy-nfs-cephfs-compute-0-jzeayf)
Nov 24 10:15:07 compute-0 podman[299328]: 2025-11-24 10:15:07.625092147 +0000 UTC m=+0.065363616 container exec da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, name=keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 24 10:15:07 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:07.643Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:07 compute-0 ceph-mon[74331]: pgmap v1455: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:07 compute-0 podman[299328]: 2025-11-24 10:15:07.665471545 +0000 UTC m=+0.105743014 container exec_died da5e2e82794b556dfcd8ea30635453752d519b3ce5ab3e77ac09ab6f644d0021 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-keepalived-nfs-cephfs-compute-0-mglptr, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vcs-type=git, name=keepalived, release=1793)
Nov 24 10:15:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:15:07 compute-0 ceph-mon[74331]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8902 writes, 39K keys, 8902 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 8902 writes, 8902 syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1568 writes, 7482 keys, 1568 commit groups, 1.0 writes per commit group, ingest: 11.53 MB, 0.02 MB/s
                                           Interval WAL: 1568 writes, 1568 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    108.9      0.57              0.19        24    0.024       0      0       0.0       0.0
                                             L6      1/0   13.59 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.8    168.8    145.2      2.07              0.75        23    0.090    138K    13K       0.0       0.0
                                            Sum      1/0   13.59 MB   0.0      0.3     0.1      0.3       0.4      0.1       0.0   5.8    132.2    137.4      2.65              0.93        47    0.056    138K    13K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2    142.1    142.3      0.65              0.28        12    0.054     43K   3585       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    168.8    145.2      2.07              0.75        23    0.090    138K    13K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    109.6      0.57              0.19        23    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.6      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.061, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.35 GB write, 0.12 MB/s write, 0.34 GB read, 0.12 MB/s read, 2.6 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b87797d350#2 capacity: 304.00 MB usage: 32.88 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000371 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1955,31.83 MB,10.4695%) FilterBlock(48,407.42 KB,0.130879%) IndexBlock(48,672.86 KB,0.216148%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
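[note] A quick sanity check on the RocksDB dump above: write amplification is (flush bytes + compaction bytes written) divided by user ingest. Plugging in the cumulative figures from this dump reproduces the Sum W-Amp column to within rounding:

    ingest_gb = 0.07             # "Cumulative writes ... ingest: 0.07 GB"
    flush_gb = 0.061             # "Flush(GB): cumulative 0.061"
    compaction_write_gb = 0.35   # "Cumulative compaction: 0.35 GB write"

    print((flush_gb + compaction_write_gb) / ingest_gb)  # ~5.9 vs W-Amp 5.8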
Nov 24 10:15:07 compute-0 podman[299394]: 2025-11-24 10:15:07.938819408 +0000 UTC m=+0.069252332 container exec 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:15:07 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:07 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:07 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:07.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:07 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1456: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:07 compute-0 podman[299394]: 2025-11-24 10:15:07.979545674 +0000 UTC m=+0.109978618 container exec_died 333e8d52ac14c1ad2562a9b1108149f074ce2b54eb58b09f4ec22c7b717459e6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:15:08 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:08 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000025s ======
Nov 24 10:15:08 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:08.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 24 10:15:08 compute-0 podman[299469]: 2025-11-24 10:15:08.299841578 +0000 UTC m=+0.084566421 container exec 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 10:15:08 compute-0 podman[299469]: 2025-11-24 10:15:08.480503641 +0000 UTC m=+0.265228454 container exec_died 64e58e60bc23a7d57cc9d528e4c0a82e4df02b33e046975aeb8ef22ad0995bf2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 24 10:15:08 compute-0 podman[299583]: 2025-11-24 10:15:08.864721133 +0000 UTC m=+0.056014115 container exec 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:15:08 compute-0 podman[299583]: 2025-11-24 10:15:08.905662925 +0000 UTC m=+0.096955947 container exec_died 10beeaa631829ec8676854498a3516687cc150842a3e976767e7a8406d406beb (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 24 10:15:08 compute-0 sudo[298889]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:08 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:15:08 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:08 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:08.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:15:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:09 compute-0 sudo[299623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:15:09 compute-0 sudo[299623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:09 compute-0 sudo[299623]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:09 compute-0 sudo[299648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Nov 24 10:15:09 compute-0 sudo[299648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:09 compute-0 ceph-mon[74331]: pgmap v1456: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:09 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:09 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:09 compute-0 sudo[299648]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1457: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:15:09 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1458: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 24 10:15:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:09 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 24 10:15:09 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
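[note] The mon_command entries above are the cephadm mgr module persisting its state (device cache, OSD removal queue, service specs) under the monitors' config-key store. Reading one of those keys back, with the key name copied from the log (this assumes a working client.admin keyring on the host):

    import subprocess

    key = "mgr/cephadm/spec.nfs.cephfs"
    spec = subprocess.run(["ceph", "config-key", "get", key],
                          check=True, capture_output=True, text=True).stdout
    print(spec)  # the stored NFS service spec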
Nov 24 10:15:09 compute-0 sudo[299705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:15:09 compute-0 sudo[299705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:09 compute-0 sudo[299705]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:09 compute-0 sudo[299730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 24 10:15:09 compute-0 sudo[299730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:09 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:09 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:09 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:09.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:09 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:15:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:15:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:15:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:10 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:15:10 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:10 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:10 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:10.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:10 compute-0 podman[299796]: 2025-11-24 10:15:10.416363448 +0000 UTC m=+0.066182346 container create ed9951f0a5333c319b7bf878062679cfd89d178aa8c0c5550435bc5a1d08d9fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_kalam, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 24 10:15:10 compute-0 systemd[1]: Started libpod-conmon-ed9951f0a5333c319b7bf878062679cfd89d178aa8c0c5550435bc5a1d08d9fa.scope.
Nov 24 10:15:10 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:15:10 compute-0 podman[299796]: 2025-11-24 10:15:10.389407623 +0000 UTC m=+0.039226641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:15:10 compute-0 podman[299796]: 2025-11-24 10:15:10.49736577 +0000 UTC m=+0.147184688 container init ed9951f0a5333c319b7bf878062679cfd89d178aa8c0c5550435bc5a1d08d9fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:15:10 compute-0 podman[299796]: 2025-11-24 10:15:10.510821833 +0000 UTC m=+0.160640741 container start ed9951f0a5333c319b7bf878062679cfd89d178aa8c0c5550435bc5a1d08d9fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 10:15:10 compute-0 podman[299796]: 2025-11-24 10:15:10.514350679 +0000 UTC m=+0.164169597 container attach ed9951f0a5333c319b7bf878062679cfd89d178aa8c0c5550435bc5a1d08d9fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_kalam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:15:10 compute-0 recursing_kalam[299813]: 167 167
Nov 24 10:15:10 compute-0 systemd[1]: libpod-ed9951f0a5333c319b7bf878062679cfd89d178aa8c0c5550435bc5a1d08d9fa.scope: Deactivated successfully.
Nov 24 10:15:10 compute-0 podman[299796]: 2025-11-24 10:15:10.520008839 +0000 UTC m=+0.169827737 container died ed9951f0a5333c319b7bf878062679cfd89d178aa8c0c5550435bc5a1d08d9fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_kalam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:15:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e584bf43ef52ec9b01f53f4bd63f44ec80262a8f67d1c686611c7dbca6741fa-merged.mount: Deactivated successfully.
Nov 24 10:15:10 compute-0 podman[299796]: 2025-11-24 10:15:10.559802813 +0000 UTC m=+0.209621711 container remove ed9951f0a5333c319b7bf878062679cfd89d178aa8c0c5550435bc5a1d08d9fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_kalam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:15:10 compute-0 systemd[1]: libpod-conmon-ed9951f0a5333c319b7bf878062679cfd89d178aa8c0c5550435bc5a1d08d9fa.scope: Deactivated successfully.
Nov 24 10:15:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 10:15:10 compute-0 ceph-mon[74331]: pgmap v1457: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Nov 24 10:15:10 compute-0 ceph-mon[74331]: pgmap v1458: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:10 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:10 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 10:15:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 10:15:10 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:10 compute-0 podman[299837]: 2025-11-24 10:15:10.725966837 +0000 UTC m=+0.051481172 container create 8c78102d35c97d3017a0e9cbdb7ffe733b5bedc1dc9c4a97400b49226548b796 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shaw, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:15:10 compute-0 systemd[1]: Started libpod-conmon-8c78102d35c97d3017a0e9cbdb7ffe733b5bedc1dc9c4a97400b49226548b796.scope.
Nov 24 10:15:10 compute-0 podman[299837]: 2025-11-24 10:15:10.70418297 +0000 UTC m=+0.029697275 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:15:10 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd3887051120752abaf2ab7d4029e3c0d0153a320b436e784fcf2f68d629668/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd3887051120752abaf2ab7d4029e3c0d0153a320b436e784fcf2f68d629668/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd3887051120752abaf2ab7d4029e3c0d0153a320b436e784fcf2f68d629668/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd3887051120752abaf2ab7d4029e3c0d0153a320b436e784fcf2f68d629668/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccd3887051120752abaf2ab7d4029e3c0d0153a320b436e784fcf2f68d629668/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
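[note] The xfs remount messages above cap timestamps at 0x7fffffff seconds, the classic 32-bit time_t limit. Confirming the advertised date:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00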
Nov 24 10:15:10 compute-0 podman[299837]: 2025-11-24 10:15:10.832228133 +0000 UTC m=+0.157742438 container init 8c78102d35c97d3017a0e9cbdb7ffe733b5bedc1dc9c4a97400b49226548b796 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shaw, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:15:10 compute-0 podman[299837]: 2025-11-24 10:15:10.853734005 +0000 UTC m=+0.179248320 container start 8c78102d35c97d3017a0e9cbdb7ffe733b5bedc1dc9c4a97400b49226548b796 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shaw, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:15:10 compute-0 podman[299837]: 2025-11-24 10:15:10.862489851 +0000 UTC m=+0.188004156 container attach 8c78102d35c97d3017a0e9cbdb7ffe733b5bedc1dc9c4a97400b49226548b796 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:15:10 compute-0 podman[299856]: 2025-11-24 10:15:10.899437523 +0000 UTC m=+0.074086831 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 10:15:10 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:15:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Nov 24 10:15:10 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:15:10] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
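[note] The two lines above are Prometheus scraping the mgr's built-in exporter (a 48460-byte /metrics response). A manual scrape of what is presumably the same endpoint; 9283 is the mgr prometheus module's default port and is an assumption here, since the log only shows the request path:

    import urllib.request

    URL = "http://192.168.122.100:9283/metrics"
    with urllib.request.urlopen(URL, timeout=5) as r:
        body = r.read().decode()
    print(len(body), "bytes;", body.splitlines()[0])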
Nov 24 10:15:11 compute-0 agitated_shaw[299853]: --> passed data devices: 0 physical, 1 LVM
Nov 24 10:15:11 compute-0 agitated_shaw[299853]: --> All data devices are unavailable
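[note] "All data devices are unavailable" above is ceph-volume's lvm batch declining to act: the single LVM device passed in (/dev/ceph_vg0/ceph_lv0, per the logged command) was rejected, most likely because it already carries an OSD, so nothing new is created and cephadm falls back to the "lvm list" that follows. A sketch that surfaces per-device availability and rejection reasons via ceph-volume's JSON inventory (assumes ceph-volume is runnable on the host; in this deployment it actually executes inside the cephadm container):

    import json
    import subprocess

    report = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True).stdout

    for dev in json.loads(report):
        status = ("available" if dev["available"] else
                  "unavailable: " + "; ".join(dev.get("rejected_reasons", [])))
        print(dev["path"], status)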
Nov 24 10:15:11 compute-0 systemd[1]: libpod-8c78102d35c97d3017a0e9cbdb7ffe733b5bedc1dc9c4a97400b49226548b796.scope: Deactivated successfully.
Nov 24 10:15:11 compute-0 podman[299837]: 2025-11-24 10:15:11.271121657 +0000 UTC m=+0.596635952 container died 8c78102d35c97d3017a0e9cbdb7ffe733b5bedc1dc9c4a97400b49226548b796 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 10:15:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccd3887051120752abaf2ab7d4029e3c0d0153a320b436e784fcf2f68d629668-merged.mount: Deactivated successfully.
Nov 24 10:15:11 compute-0 podman[299837]: 2025-11-24 10:15:11.325933881 +0000 UTC m=+0.651448166 container remove 8c78102d35c97d3017a0e9cbdb7ffe733b5bedc1dc9c4a97400b49226548b796 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:15:11 compute-0 systemd[1]: libpod-conmon-8c78102d35c97d3017a0e9cbdb7ffe733b5bedc1dc9c4a97400b49226548b796.scope: Deactivated successfully.
Nov 24 10:15:11 compute-0 sudo[299730]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:11 compute-0 sudo[299898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:15:11 compute-0 sudo[299898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:11 compute-0 sudo[299898]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:11 compute-0 sudo[299923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- lvm list --format json
Nov 24 10:15:11 compute-0 sudo[299923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:11 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1459: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:11 compute-0 ceph-mon[74331]: pgmap v1459: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:11 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:11 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:11 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:11.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:11 compute-0 sudo[299979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:15:11 compute-0 sudo[299979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:11 compute-0 sudo[299979]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:12 compute-0 nova_compute[257700]: 2025-11-24 10:15:11.999 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:12 compute-0 nova_compute[257700]: 2025-11-24 10:15:12.002 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:12 compute-0 podman[300016]: 2025-11-24 10:15:12.083513207 +0000 UTC m=+0.059148082 container create 3c9f91337ea51a5da67d7a48e10a0e64979a96c58c471c330919d8d65bc137ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:15:12 compute-0 systemd[1]: Started libpod-conmon-3c9f91337ea51a5da67d7a48e10a0e64979a96c58c471c330919d8d65bc137ea.scope.
Nov 24 10:15:12 compute-0 podman[300016]: 2025-11-24 10:15:12.055356842 +0000 UTC m=+0.030991727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:15:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:15:12 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:12 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:12 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:12.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:12 compute-0 podman[300016]: 2025-11-24 10:15:12.179443987 +0000 UTC m=+0.155078882 container init 3c9f91337ea51a5da67d7a48e10a0e64979a96c58c471c330919d8d65bc137ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_perlman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 10:15:12 compute-0 podman[300016]: 2025-11-24 10:15:12.187453145 +0000 UTC m=+0.163088010 container start 3c9f91337ea51a5da67d7a48e10a0e64979a96c58c471c330919d8d65bc137ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_perlman, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:15:12 compute-0 podman[300016]: 2025-11-24 10:15:12.191850124 +0000 UTC m=+0.167485029 container attach 3c9f91337ea51a5da67d7a48e10a0e64979a96c58c471c330919d8d65bc137ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:15:12 compute-0 zealous_perlman[300033]: 167 167
Nov 24 10:15:12 compute-0 systemd[1]: libpod-3c9f91337ea51a5da67d7a48e10a0e64979a96c58c471c330919d8d65bc137ea.scope: Deactivated successfully.
Nov 24 10:15:12 compute-0 conmon[300033]: conmon 3c9f91337ea51a5da67d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c9f91337ea51a5da67d7a48e10a0e64979a96c58c471c330919d8d65bc137ea.scope/container/memory.events
Nov 24 10:15:12 compute-0 podman[300016]: 2025-11-24 10:15:12.195069364 +0000 UTC m=+0.170704229 container died 3c9f91337ea51a5da67d7a48e10a0e64979a96c58c471c330919d8d65bc137ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 24 10:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-93799a03a9f0f5bd6a1e94eb8283bdb5b4785e004bdf7c588c092ee2ca4a862d-merged.mount: Deactivated successfully.
Nov 24 10:15:12 compute-0 podman[300016]: 2025-11-24 10:15:12.240368083 +0000 UTC m=+0.216002948 container remove 3c9f91337ea51a5da67d7a48e10a0e64979a96c58c471c330919d8d65bc137ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 10:15:12 compute-0 systemd[1]: libpod-conmon-3c9f91337ea51a5da67d7a48e10a0e64979a96c58c471c330919d8d65bc137ea.scope: Deactivated successfully.
Nov 24 10:15:12 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:12 compute-0 podman[300059]: 2025-11-24 10:15:12.462209793 +0000 UTC m=+0.062330820 container create eb1b47edc0562277a238a2151fa6aa2aa7ec6eefe58e242645295acf8c061450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shirley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:15:12 compute-0 systemd[1]: Started libpod-conmon-eb1b47edc0562277a238a2151fa6aa2aa7ec6eefe58e242645295acf8c061450.scope.
Nov 24 10:15:12 compute-0 podman[300059]: 2025-11-24 10:15:12.431249988 +0000 UTC m=+0.031371105 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:15:12 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cb700baf3d381b883ce0a52b89d080e878ce718b8aa6e2cc4de3510357a3e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cb700baf3d381b883ce0a52b89d080e878ce718b8aa6e2cc4de3510357a3e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cb700baf3d381b883ce0a52b89d080e878ce718b8aa6e2cc4de3510357a3e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cb700baf3d381b883ce0a52b89d080e878ce718b8aa6e2cc4de3510357a3e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:15:12 compute-0 podman[300059]: 2025-11-24 10:15:12.577209685 +0000 UTC m=+0.177330752 container init eb1b47edc0562277a238a2151fa6aa2aa7ec6eefe58e242645295acf8c061450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:15:12 compute-0 podman[300059]: 2025-11-24 10:15:12.585858259 +0000 UTC m=+0.185979296 container start eb1b47edc0562277a238a2151fa6aa2aa7ec6eefe58e242645295acf8c061450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 24 10:15:12 compute-0 podman[300059]: 2025-11-24 10:15:12.589665892 +0000 UTC m=+0.189786939 container attach eb1b47edc0562277a238a2151fa6aa2aa7ec6eefe58e242645295acf8c061450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shirley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 24 10:15:12 compute-0 clever_shirley[300075]: {
Nov 24 10:15:12 compute-0 clever_shirley[300075]:     "0": [
Nov 24 10:15:12 compute-0 clever_shirley[300075]:         {
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             "devices": [
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "/dev/loop3"
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             ],
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             "lv_name": "ceph_lv0",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             "lv_size": "21470642176",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=84a084c3-61a7-5de7-8207-1f88efa59a64,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             "lv_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             "name": "ceph_lv0",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             "tags": {
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.block_uuid": "G8n7Oh-MWY0-vfhI-ADXB-NBXZ-BWE2-95qJf5",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.cephx_lockbox_secret": "",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.cluster_fsid": "84a084c3-61a7-5de7-8207-1f88efa59a64",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.cluster_name": "ceph",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.crush_device_class": "",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.encrypted": "0",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.osd_fsid": "4f7ff0c1-3b52-4bb3-bad4-c6fdc271c50c",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.osd_id": "0",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.type": "block",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.vdo": "0",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:                 "ceph.with_tpm": "0"
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             },
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             "type": "block",
Nov 24 10:15:12 compute-0 clever_shirley[300075]:             "vg_name": "ceph_vg0"
Nov 24 10:15:12 compute-0 clever_shirley[300075]:         }
Nov 24 10:15:12 compute-0 clever_shirley[300075]:     ]
Nov 24 10:15:12 compute-0 clever_shirley[300075]: }
Nov 24 10:15:12 compute-0 systemd[1]: libpod-eb1b47edc0562277a238a2151fa6aa2aa7ec6eefe58e242645295acf8c061450.scope: Deactivated successfully.
Nov 24 10:15:12 compute-0 podman[300059]: 2025-11-24 10:15:12.908064899 +0000 UTC m=+0.508185976 container died eb1b47edc0562277a238a2151fa6aa2aa7ec6eefe58e242645295acf8c061450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shirley, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-51cb700baf3d381b883ce0a52b89d080e878ce718b8aa6e2cc4de3510357a3e5-merged.mount: Deactivated successfully.
Nov 24 10:15:12 compute-0 podman[300059]: 2025-11-24 10:15:12.974160372 +0000 UTC m=+0.574281409 container remove eb1b47edc0562277a238a2151fa6aa2aa7ec6eefe58e242645295acf8c061450 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:15:12 compute-0 systemd[1]: libpod-conmon-eb1b47edc0562277a238a2151fa6aa2aa7ec6eefe58e242645295acf8c061450.scope: Deactivated successfully.
Nov 24 10:15:13 compute-0 sudo[299923]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:13 compute-0 sudo[300094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 24 10:15:13 compute-0 sudo[300094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:13 compute-0 sudo[300094]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:13 compute-0 sudo[300119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/84a084c3-61a7-5de7-8207-1f88efa59a64/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 84a084c3-61a7-5de7-8207-1f88efa59a64 -- raw list --format json
Nov 24 10:15:13 compute-0 sudo[300119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:13 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:13.627Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:13 compute-0 podman[300185]: 2025-11-24 10:15:13.683082416 +0000 UTC m=+0.048315074 container create 4c557ad1878a9db17f6674bd0b9d184e9aaccfe52c44f350a1df14653db27092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:15:13 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1460: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:13 compute-0 systemd[1]: Started libpod-conmon-4c557ad1878a9db17f6674bd0b9d184e9aaccfe52c44f350a1df14653db27092.scope.
Nov 24 10:15:13 compute-0 podman[300185]: 2025-11-24 10:15:13.660009277 +0000 UTC m=+0.025241915 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:15:13 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:15:13 compute-0 podman[300185]: 2025-11-24 10:15:13.782184235 +0000 UTC m=+0.147416873 container init 4c557ad1878a9db17f6674bd0b9d184e9aaccfe52c44f350a1df14653db27092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_driscoll, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 10:15:13 compute-0 podman[300185]: 2025-11-24 10:15:13.794811217 +0000 UTC m=+0.160043875 container start 4c557ad1878a9db17f6674bd0b9d184e9aaccfe52c44f350a1df14653db27092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_driscoll, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 24 10:15:13 compute-0 podman[300185]: 2025-11-24 10:15:13.800432256 +0000 UTC m=+0.165664874 container attach 4c557ad1878a9db17f6674bd0b9d184e9aaccfe52c44f350a1df14653db27092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_driscoll, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 10:15:13 compute-0 gallant_driscoll[300201]: 167 167
Nov 24 10:15:13 compute-0 systemd[1]: libpod-4c557ad1878a9db17f6674bd0b9d184e9aaccfe52c44f350a1df14653db27092.scope: Deactivated successfully.
Nov 24 10:15:13 compute-0 podman[300185]: 2025-11-24 10:15:13.806022764 +0000 UTC m=+0.171255472 container died 4c557ad1878a9db17f6674bd0b9d184e9aaccfe52c44f350a1df14653db27092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_driscoll, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 24 10:15:13 compute-0 ceph-mon[74331]: pgmap v1460: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfb169efd9868b24c992ba97824a1343457ebdb46d41ce522673a160904e6a2a-merged.mount: Deactivated successfully.
Nov 24 10:15:13 compute-0 podman[300185]: 2025-11-24 10:15:13.851681042 +0000 UTC m=+0.216913660 container remove 4c557ad1878a9db17f6674bd0b9d184e9aaccfe52c44f350a1df14653db27092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_driscoll, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 10:15:13 compute-0 systemd[1]: libpod-conmon-4c557ad1878a9db17f6674bd0b9d184e9aaccfe52c44f350a1df14653db27092.scope: Deactivated successfully.
Nov 24 10:15:13 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:13 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:13 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:13.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:14 compute-0 podman[300226]: 2025-11-24 10:15:14.093771243 +0000 UTC m=+0.071348423 container create e21ad882a5b7a85b7fa2b300a50ee3f948c88a4cc78e75472919f6abac3dee70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_thompson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 24 10:15:14 compute-0 systemd[1]: Started libpod-conmon-e21ad882a5b7a85b7fa2b300a50ee3f948c88a4cc78e75472919f6abac3dee70.scope.
Nov 24 10:15:14 compute-0 podman[300226]: 2025-11-24 10:15:14.068414357 +0000 UTC m=+0.045991637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 24 10:15:14 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:14 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:14 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:14.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:14 compute-0 systemd[1]: Started libcrun container.
Nov 24 10:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ad7d7222ab451e3b537981a25fa96548c617c7de77d2fc10bc3a66211efdfa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 10:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ad7d7222ab451e3b537981a25fa96548c617c7de77d2fc10bc3a66211efdfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 10:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ad7d7222ab451e3b537981a25fa96548c617c7de77d2fc10bc3a66211efdfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 10:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ad7d7222ab451e3b537981a25fa96548c617c7de77d2fc10bc3a66211efdfa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 10:15:14 compute-0 podman[300226]: 2025-11-24 10:15:14.196647145 +0000 UTC m=+0.174224335 container init e21ad882a5b7a85b7fa2b300a50ee3f948c88a4cc78e75472919f6abac3dee70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 10:15:14 compute-0 podman[300226]: 2025-11-24 10:15:14.208727983 +0000 UTC m=+0.186305153 container start e21ad882a5b7a85b7fa2b300a50ee3f948c88a4cc78e75472919f6abac3dee70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_thompson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 10:15:14 compute-0 podman[300226]: 2025-11-24 10:15:14.213220384 +0000 UTC m=+0.190797774 container attach e21ad882a5b7a85b7fa2b300a50ee3f948c88a4cc78e75472919f6abac3dee70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_thompson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 24 10:15:14 compute-0 lvm[300317]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:15:14 compute-0 lvm[300317]: VG ceph_vg0 finished
Nov 24 10:15:14 compute-0 focused_thompson[300242]: {}
Nov 24 10:15:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:14 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:15:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:15:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:15:15 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:15 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:15:15 compute-0 systemd[1]: libpod-e21ad882a5b7a85b7fa2b300a50ee3f948c88a4cc78e75472919f6abac3dee70.scope: Deactivated successfully.
Nov 24 10:15:15 compute-0 systemd[1]: libpod-e21ad882a5b7a85b7fa2b300a50ee3f948c88a4cc78e75472919f6abac3dee70.scope: Consumed 1.349s CPU time.
Nov 24 10:15:15 compute-0 podman[300226]: 2025-11-24 10:15:15.007569529 +0000 UTC m=+0.985146719 container died e21ad882a5b7a85b7fa2b300a50ee3f948c88a4cc78e75472919f6abac3dee70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 10:15:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9ad7d7222ab451e3b537981a25fa96548c617c7de77d2fc10bc3a66211efdfa-merged.mount: Deactivated successfully.
Nov 24 10:15:15 compute-0 podman[300226]: 2025-11-24 10:15:15.059500562 +0000 UTC m=+1.037077752 container remove e21ad882a5b7a85b7fa2b300a50ee3f948c88a4cc78e75472919f6abac3dee70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 24 10:15:15 compute-0 systemd[1]: libpod-conmon-e21ad882a5b7a85b7fa2b300a50ee3f948c88a4cc78e75472919f6abac3dee70.scope: Deactivated successfully.
Nov 24 10:15:15 compute-0 sudo[300119]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 24 10:15:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:15 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 24 10:15:15 compute-0 ceph-mon[74331]: log_channel(audit) log [INF] : from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:15 compute-0 sudo[300334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 24 10:15:15 compute-0 sudo[300334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:15 compute-0 sudo[300334]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:15:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:15:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:15:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:15:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:15:15 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:15:15 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1461: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:15 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:15 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:15 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:15.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:16 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:16 compute-0 ceph-mon[74331]: from='mgr.14715 ' entity='mgr.compute-0.mauvni' 
Nov 24 10:15:16 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:15:16 compute-0 ceph-mon[74331]: pgmap v1461: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:16 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:16 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:16 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:16.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:17 compute-0 nova_compute[257700]: 2025-11-24 10:15:17.002 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:17 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:17 compute-0 sshd-session[300361]: Accepted publickey for zuul from 192.168.122.10 port 43582 ssh2: ECDSA SHA256:MeSde0OmmlmFVnLWx/OKNxgeUUFhxUB3MA0eUyH5QEE
Nov 24 10:15:17 compute-0 systemd-logind[822]: New session 59 of user zuul.
Nov 24 10:15:17 compute-0 systemd[1]: Started Session 59 of User zuul.
Nov 24 10:15:17 compute-0 sshd-session[300361]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 24 10:15:17 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:17.644Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:17 compute-0 sudo[300366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 24 10:15:17 compute-0 sudo[300366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 24 10:15:17 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1462: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:17 compute-0 ceph-mon[74331]: pgmap v1462: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:17 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:17 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:17 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:17.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:18 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:18 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:18 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:18.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:18 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:18.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:19 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1463: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:19 compute-0 ceph-mon[74331]: pgmap v1463: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 24 10:15:19 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:19 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:19 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:19.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:15:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:15:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:19 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:15:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:20 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:15:20 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28031 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:20 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:20 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:20.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:20 compute-0 sshd-session[300401]: Received disconnect from 83.229.122.23 port 49712:11: Bye Bye [preauth]
Nov 24 10:15:20 compute-0 sshd-session[300401]: Disconnected from authenticating user root 83.229.122.23 port 49712 [preauth]
Nov 24 10:15:20 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26536 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18744 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:15:20.592 165073 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:15:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:15:20.592 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:15:20 compute-0 ovn_metadata_agent[165067]: 2025-11-24 10:15:20.592 165073 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:15:20 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28043 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26548 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-0 ceph-mon[74331]: from='client.28031 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-0 ceph-mon[74331]: from='client.26536 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-0 ceph-mon[74331]: from='client.18744 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-0 ceph-mon[74331]: from='client.28043 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18756 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:20 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:15:20] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 24 10:15:20 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:15:20] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Nov 24 10:15:21 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1464: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:21 compute-0 ceph-mon[74331]: from='client.26548 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:21 compute-0 ceph-mon[74331]: from='client.18756 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3403419153' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:15:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/382031394' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:15:21 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3055159028' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 10:15:21 compute-0 ceph-mon[74331]: pgmap v1464: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:21 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:21 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:21 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:21.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:22 compute-0 nova_compute[257700]: 2025-11-24 10:15:22.005 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:15:22 compute-0 nova_compute[257700]: 2025-11-24 10:15:22.008 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:15:22 compute-0 nova_compute[257700]: 2025-11-24 10:15:22.008 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:15:22 compute-0 nova_compute[257700]: 2025-11-24 10:15:22.008 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:15:22 compute-0 nova_compute[257700]: 2025-11-24 10:15:22.026 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:22 compute-0 nova_compute[257700]: 2025-11-24 10:15:22.027 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:15:22 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:22 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:22 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:22.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:22 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:23 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:23.629Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:23 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1465: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:23 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:23 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:23 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:23.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:24 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:24 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:24 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:24.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:24 compute-0 ceph-mon[74331]: pgmap v1465: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:24 compute-0 ovs-vsctl[300690]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 24 10:15:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:15:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:15:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:24 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:15:25 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:25 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:15:25 compute-0 virtqemud[257224]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 24 10:15:25 compute-0 virtqemud[257224]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 24 10:15:25 compute-0 virtqemud[257224]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 10:15:25 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1466: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:25 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:25 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:15:25 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:25.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:15:26 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:26 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:26 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:26.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:26 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26563 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:27 compute-0 nova_compute[257700]: 2025-11-24 10:15:27.028 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:15:27 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26575 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:27 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:27.645Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:27 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: cache status {prefix=cache status} (starting...)
Nov 24 10:15:27 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:15:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:27 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 24 10:15:27 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:27 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1467: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:27 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26581 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:27 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: client ls {prefix=client ls} (starting...)
Nov 24 10:15:27 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:15:27 compute-0 ceph-mon[74331]: pgmap v1466: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:27 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1857747192' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:27 compute-0 ceph-mon[74331]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:27 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:27 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:27 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:27.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:27 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18774 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:27 compute-0 lvm[301128]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 10:15:27 compute-0 lvm[301128]: VG ceph_vg0 finished
Nov 24 10:15:28 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28070 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:28 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:28 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:28.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:28 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26599 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18783 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28079 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: damage ls {prefix=damage ls} (starting...)
Nov 24 10:15:28 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:15:28 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: dump loads {prefix=dump loads} (starting...)
Nov 24 10:15:28 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:15:28 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 24 10:15:28 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:15:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 24 10:15:28 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1164800559' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18813 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 24 10:15:28 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:15:28 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28100 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 24 10:15:28 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.26563 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.26575 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: pgmap v1467: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.26581 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.18774 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.28070 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1973484189' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.26599 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.18783 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.28079 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/576920121' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2643511279' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1164800559' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1465121536' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:28 compute-0 ceph-mon[74331]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 10:15:28 compute-0 nova_compute[257700]: 2025-11-24 10:15:28.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:28 compute-0 nova_compute[257700]: 2025-11-24 10:15:28.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 24 10:15:28 compute-0 nova_compute[257700]: 2025-11-24 10:15:28.921 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 24 10:15:28 compute-0 nova_compute[257700]: 2025-11-24 10:15:28.946 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 24 10:15:28 compute-0 nova_compute[257700]: 2025-11-24 10:15:28.946 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:28 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 24 10:15:28 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:15:28 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:28.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 24 10:15:29 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1884848928' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 24 10:15:29 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:15:29 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18828 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28121 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26629 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 24 10:15:29 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:15:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Nov 24 10:15:29 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4054669363' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 24 10:15:29 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:15:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 24 10:15:29 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1760821900' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26641 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1468: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:29 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: ops {prefix=ops} (starting...)
Nov 24 10:15:29 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:15:29 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 24 10:15:29 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421036100' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.18813 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.28100 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2880740983' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1884848928' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2551530414' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.18828 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3317630471' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.28121 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.26629 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4054669363' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4254001673' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1760821900' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.26641 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: pgmap v1468: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2434305700' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 10:15:29 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2452454843' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 10:15:29 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:29 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:29 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:29.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:15:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:15:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:29 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:15:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:30 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:15:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 24 10:15:30 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1934793391' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 24 10:15:30 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:30 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:30 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:30 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:30.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:30 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18864 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: session ls {prefix=session ls} (starting...)
Nov 24 10:15:30 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe Can't run that command on an inactive MDS!
Nov 24 10:15:30 compute-0 ceph-mds[96241]: mds.cephfs.compute-0.cibmfe asok_command: status {prefix=status} (starting...)
Nov 24 10:15:30 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 10:15:30 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609377997' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28166 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18885 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:30 compute-0 nova_compute[257700]: 2025-11-24 10:15:30.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2421036100' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2676757163' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1934793391' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1888070657' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1049383097' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.18864 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2514280768' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/368086163' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/609377997' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.28166 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.18885 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/566080117' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3079237173' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:30 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1028027921' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:15:30 compute-0 nova_compute[257700]: 2025-11-24 10:15:30.948 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:15:30 compute-0 nova_compute[257700]: 2025-11-24 10:15:30.949 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:15:30 compute-0 nova_compute[257700]: 2025-11-24 10:15:30.949 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:15:30 compute-0 nova_compute[257700]: 2025-11-24 10:15:30.949 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 24 10:15:30 compute-0 nova_compute[257700]: 2025-11-24 10:15:30.949 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:15:30 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:15:30] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Nov 24 10:15:30 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:15:30] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Nov 24 10:15:31 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26695 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mgr[74626]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 10:15:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T10:15:31.064+0000 7fac1dd94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 10:15:31 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28187 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 24 10:15:31 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1808703927' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:15:31 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2915968578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:31 compute-0 nova_compute[257700]: 2025-11-24 10:15:31.395 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:15:31 compute-0 nova_compute[257700]: 2025-11-24 10:15:31.547 257704 WARNING nova.virt.libvirt.driver [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 24 10:15:31 compute-0 nova_compute[257700]: 2025-11-24 10:15:31.548 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4401MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 24 10:15:31 compute-0 nova_compute[257700]: 2025-11-24 10:15:31.548 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 10:15:31 compute-0 nova_compute[257700]: 2025-11-24 10:15:31.549 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 10:15:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 24 10:15:31 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 24 10:15:31 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3904833836' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:15:31 compute-0 nova_compute[257700]: 2025-11-24 10:15:31.608 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 24 10:15:31 compute-0 nova_compute[257700]: 2025-11-24 10:15:31.608 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 24 10:15:31 compute-0 nova_compute[257700]: 2025-11-24 10:15:31.623 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 10:15:31 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26725 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1469: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3687570939' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.26695 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.28187 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1808703927' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3221653421' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3817870708' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2915968578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1357936568' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2214528181' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/549447527' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3904833836' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2964773270' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: from='client.26725 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:31 compute-0 ceph-mon[74331]: pgmap v1469: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:31 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 24 10:15:31 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/349487416' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:15:31 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:31 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:31 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:31.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:31 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18951 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:31 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T10:15:31.993+0000 7fac1dd94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 10:15:31 compute-0 ceph-mgr[74626]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 10:15:32 compute-0 nova_compute[257700]: 2025-11-24 10:15:32.031 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:32 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26743 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:32 compute-0 sudo[301863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 24 10:15:32 compute-0 sudo[301863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 24 10:15:32 compute-0 sudo[301863]: pam_unix(sudo:session): session closed for user root
Nov 24 10:15:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 24 10:15:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3496901554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:32 compute-0 nova_compute[257700]: 2025-11-24 10:15:32.108 257704 DEBUG oslo_concurrency.processutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 10:15:32 compute-0 nova_compute[257700]: 2025-11-24 10:15:32.114 257704 DEBUG nova.compute.provider_tree [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed in ProviderTree for provider: a50ce3b5-7e9e-4263-a4aa-c35573ac7257 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 10:15:32 compute-0 nova_compute[257700]: 2025-11-24 10:15:32.129 257704 DEBUG nova.scheduler.client.report [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Inventory has not changed for provider a50ce3b5-7e9e-4263-a4aa-c35573ac7257 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 10:15:32 compute-0 nova_compute[257700]: 2025-11-24 10:15:32.131 257704 DEBUG nova.compute.resource_tracker [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 24 10:15:32 compute-0 nova_compute[257700]: 2025-11-24 10:15:32.131 257704 DEBUG oslo_concurrency.lockutils [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 10:15:32 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:32 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:32 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:32.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 24 10:15:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834471110' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 24 10:15:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1389903690' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:15:32 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28259 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:32 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: 2025-11-24T10:15:32.460+0000 7fac1dd94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 10:15:32 compute-0 ceph-mgr[74626]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 10:15:32 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26758 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:32 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.18990 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:32 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26773 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:32 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 24 10:15:32 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3912695799' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:15:32 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28289 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/349487416' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.18951 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/315873122' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3861438894' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.26743 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3496901554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/173568047' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1834471110' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1389903690' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.28259 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.26758 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/631828033' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3371992669' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4241696948' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3912695799' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:15:33 compute-0 sshd-session[301349]: Received disconnect from 45.78.217.27 port 58042:11: Bye Bye [preauth]
Nov 24 10:15:33 compute-0 sshd-session[301349]: Disconnected from authenticating user root 45.78.217.27 port 58042 [preauth]
Nov 24 10:15:33 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19014 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26788 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28298 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 24 10:15:33 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1000993820' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19035 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:33.630Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:15:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:33.631Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:15:33 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:33.631Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:33 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26809 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1470: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:33 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28319 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:33 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 24 10:15:33 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674425236' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:33 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:33 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:33 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:33.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:34 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19056 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:34.274311+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:35.274465+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:36.274651+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:37.274803+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:38.275000+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4096000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:39.275168+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:40.275310+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:41.275462+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:42.275573+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:43.275700+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:44.275878+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:45.276029+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:46.276250+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:47.276376+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:48.276629+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4087808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:49.276833+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:50.276990+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:51.277126+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:52.277280+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:53.277405+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:54.277555+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:55.277723+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:56.277871+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:57.278000+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:58.278219+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:42:59.278363+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:00.278545+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:01.278704+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:02.278865+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:03.279017+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33800 session 0x558d214be960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:04.279148+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:05.279311+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:06.279451+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:07.279611+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4079616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:08.279769+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4071424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:09.279949+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4071424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:10.280167+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4071424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:11.280345+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4071424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:12.280515+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4071424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:13.280736+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4071424 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:14.280955+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.571327209s of 47.577346802s, submitted: 2
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:15.281147+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:16.281271+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:17.281454+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001378 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:18.281614+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:19.281862+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:20.282024+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:21.282621+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:22.282746+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001378 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:23.282872+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:24.282995+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:25.283151+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:26.283252+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:27.283439+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001378 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:28.283606+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:29.283732+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:30.283859+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:31.284040+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 4063232 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:32.284214+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.832866669s of 17.840095520s, submitted: 2
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:33.284384+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:34.284575+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:35.284714+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:36.284899+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:37.285041+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:38.285191+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:39.285438+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:40.285617+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:41.285985+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:42.286451+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:43.286573+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:44.286682+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:45.287061+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882800 session 0x558d22f345a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:46.287151+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:47.287320+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4055040 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:48.287739+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:49.288203+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:50.288389+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:51.288510+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:52.288679+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001246 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:53.289006+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:54.289259+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:55.289587+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:56.289718+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d22223400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.166109085s of 24.170095444s, submitted: 1
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:57.289981+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001378 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:58.290167+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:43:59.290350+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:00.290468+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:01.290579+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:02.290705+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4046848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002890 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:03.290844+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 4022272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:04.291016+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 4022272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:05.291256+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 4022272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:06.291406+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 4022272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:07.291558+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 4014080 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002890 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:08.291776+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 4014080 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:09.291945+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 4014080 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:10.292067+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 4014080 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:11.292196+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:12.292368+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002890 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:13.292531+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:14.292716+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.972208023s of 17.985364914s, submitted: 2
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:15.292838+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:16.292994+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:17.293131+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33000 session 0x558d24261680
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d23ab1680
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:18.293362+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002758 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:19.293507+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:20.293648+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:21.293979+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4005888 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:22.294157+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:23.294323+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002758 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:24.294488+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:25.294628+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:26.294765+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:27.294884+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:28.295038+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002758 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 3997696 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.542468071s of 14.546203613s, submitted: 1
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:29.295151+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:30.295266+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:31.295370+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:32.295493+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:33.295645+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004402 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:34.295874+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:35.296051+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:36.296210+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:37.296350+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 3989504 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:38.296518+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003811 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:39.296663+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:40.296808+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:41.296960+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:42.297075+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:43.297154+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003811 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:44.297329+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.668762207s of 15.679323196s, submitted: 3
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:45.297487+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:46.297643+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:47.297774+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 3981312 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:48.297989+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003679 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21d78000 session 0x558d23642b40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882000 session 0x558d238714a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:49.298132+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:50.298253+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:51.298368+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:52.299394+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:53.299533+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003679 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:54.299679+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:55.299820+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:56.299991+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:57.300186+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:58.300423+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003679 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:44:59.300553+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d209d9c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.809399605s of 14.812178612s, submitted: 1
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:00.300696+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 3973120 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:01.300913+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 3964928 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:02.301059+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:03.301201+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005323 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:04.301371+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:05.301649+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:06.301785+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:07.301924+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:08.302088+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006835 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:09.302274+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:10.302448+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:11.302678+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:12.302803+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 3956736 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:13.302929+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006244 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.976356506s of 13.989496231s, submitted: 4
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 3948544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:14.303086+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 3948544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:15.303237+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 3948544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:16.303422+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 3948544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:17.303549+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 3948544 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:18.303697+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006112 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:19.303829+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:20.303945+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:21.304155+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:22.304360+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:23.304518+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006112 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:24.304639+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33800 session 0x558d24ad8780
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d232aa960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:25.304776+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882c00 session 0x558d24ab7680
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d22223400 session 0x558d231c8f00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:26.305164+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:27.305291+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:28.305440+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006112 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.18990 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.26773 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.28289 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4157263929' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.19014 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.26788 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.28298 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1000993820' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/934442744' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3889485254' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.19035 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.26809 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: pgmap v1470: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.28319 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1674425236' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2610082563' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4268995238' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:29.305557+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:30.305691+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:31.305837+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:32.305977+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:33.306127+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006112 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:34.306263+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:35.306391+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d209d9c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.232669830s of 22.235353470s, submitted: 1
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:36.306506+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:37.306665+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:38.306863+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006376 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:39.306981+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:40.307147+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:41.307286+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86114304 unmapped: 3940352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:42.307412+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:43.307532+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006376 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:44.307696+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:45.307920+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33000 session 0x558d242512c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23883000 session 0x558d2158e000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:46.308077+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:47.308220+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.302981377s of 12.310188293s, submitted: 2
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:48.308397+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005785 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:49.308521+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:50.308654+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:51.308783+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:52.308901+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:53.309032+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005521 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:54.309162+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:55.309285+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:56.309407+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33800 session 0x558d215dd4a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:57.309618+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:58.309809+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005653 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:45:59.309940+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:00.310065+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:01.310209+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:02.310361+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:03.310477+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005653 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9181 writes, 36K keys, 9181 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9181 writes, 2009 syncs, 4.57 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 720 writes, 1140 keys, 720 commit groups, 1.0 writes per commit group, ingest: 0.38 MB, 0.00 MB/s
                                           Interval WAL: 720 writes, 336 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd2f30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558d1fbd3350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:04.310685+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 3923968 heap: 90054656 old mem: 2845415832 new mem: 2845415832
[... 3 further monclient tick / _check_auth_tickets / _check_auth_rotating cycles omitted (rotating-secrets expiry advancing 1 s per cycle, 2025-11-24T09:46:05 through 09:46:07), interleaved with prioritycache tune_memory lines identical to the one above ...]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.203287125s of 19.217010498s, submitted: 4
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86122496 unmapped: 3932160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:08.311277+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005785 data_alloc: 218103808 data_used: 184320
[... 5 tick/_check_auth_tickets/_check_auth_rotating cycles omitted (expiries 09:46:09 through 09:46:13), along with 5 identical prioritycache tune_memory lines and 2 osd.0 heartbeat lines identical to the one above ...]
[... duplicate rocksdb commit_cache_size ratio pair, bluestore.MempoolThread _resize_shards report, and prioritycache lines omitted, plus one tick cycle (expiry 09:46:14) ...]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:15.312085+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:46:16.312228+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882000 session 0x558d23bda1e0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d24ad90e0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86147072 unmapped: 3907584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
[... 5 tick cycles omitted (expiries 09:46:17 through 09:46:21), with identical prioritycache lines, one duplicate rocksdb ratio pair and _resize_shards report (meta_used 1007165), and one identical osd.0 heartbeat ...]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.968050957s of 14.455158234s, submitted: 3
[... 9 tick cycles omitted (expiries 09:46:22 through 09:46:30), with identical prioritycache lines, 2 duplicate rocksdb ratio pairs and _resize_shards reports, and 4 identical osd.0 heartbeat lines (empty op hist) ...]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d209d9c00
[... 4 tick cycles omitted (expiries 09:46:31 through 09:46:34), with identical prioritycache lines, one duplicate rocksdb ratio pair and _resize_shards report, and one identical osd.0 heartbeat ...]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.520938873s of 12.531864166s, submitted: 3
[... 28 tick cycles omitted (expiries 09:46:35 through 09:47:02), with 5 duplicate rocksdb ratio pairs, 5 near-identical _resize_shards reports, 8 identical osd.0 heartbeat lines (empty op hist), and prioritycache tune_memory lines whose mapped value drifts 86147072 -> 86163456 ...]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread fragmentation_score=0.000032 took=0.000042s
[... 19 tick cycles omitted (expiries 09:47:03 through 09:47:21), with 4 duplicate rocksdb ratio pairs and _resize_shards reports, 5 identical osd.0 heartbeat lines, and prioritycache lines (mapped 86163456 -> 86171648) ...]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.280517578s of 47.298473358s, submitted: 3
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [0,0,1])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3833856 heap: 90054656 old mem: 2845415832 new mem: 2845415832
[... 13 tick cycles omitted (expiries 09:47:22 through 09:47:34), with 3 duplicate rocksdb ratio pairs and _resize_shards reports, 5 identical osd.0 heartbeat lines, and prioritycache lines showing mapped rising to 87621632 and heap to 91103232 ...]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d214b94a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:35.329540+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:36.329690+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:37.329839+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:38.329998+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:39.330182+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:40.330307+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:41.330486+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:42.330641+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:43.330834+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007954 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:44.330996+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 3481600 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:45.331147+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.958734512s of 24.189737320s, submitted: 354
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:46.331305+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:47.331480+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:48.331688+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011110 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:49.331843+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:50.332046+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:51.332251+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:52.332372+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:53.332486+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011110 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:54.332632+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:55.332771+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:56.333240+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:57.333559+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:58.333903+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010519 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:47:59.334194+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:00.334361+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 3473408 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.056202888s of 15.251289368s, submitted: 4
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:01.334649+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:02.334804+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:03.335058+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010387 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:04.335338+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:05.335563+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:06.335762+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:07.335967+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:08.336184+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23883800 session 0x558d22276960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882800 session 0x558d222625a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010387 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:09.336395+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:10.336623+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:11.336868+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:12.336975+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33000 session 0x558d245c45a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:13.337112+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010387 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:14.337287+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:15.337506+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:16.337649+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:17.337787+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:18.337981+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010387 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:19.338137+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.436895370s of 18.440626144s, submitted: 1
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:20.338324+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:21.338474+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:22.338660+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:23.338801+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 3465216 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010651 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:24.338937+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 3457024 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:25.339087+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 3457024 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:26.339392+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 3457024 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:27.339521+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 3457024 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:28.339687+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 3457024 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010060 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:29.339788+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 3457024 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.091918945s of 10.104517937s, submitted: 3
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:30.339901+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87654400 unmapped: 3448832 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:31.340028+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 87654400 unmapped: 3448832 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:32.340215+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:33.340308+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010849 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:34.340414+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:35.340581+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:36.340703+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:37.340831+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:38.341045+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010849 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:39.341171+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:40.341321+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.158608437s of 11.199194908s, submitted: 4
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:41.341463+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:42.341651+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:43.341855+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010717 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:44.342048+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:45.342216+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:46.342345+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:47.342477+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:48.342659+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:49.342801+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010717 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:50.342953+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882000 session 0x558d24abb4a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23883c00 session 0x558d214bcb40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:51.343089+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:52.343250+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:53.343376+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:54.343505+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010717 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:55.343615+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:56.343729+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:57.343880+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:58.344031+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:48:59.344353+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010717 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:00.344622+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:01.344819+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d209d9c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.965158463s of 20.967866898s, submitted: 1
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:02.344993+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:03.345179+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:04.345725+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010849 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:05.346065+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:06.346295+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:07.346578+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:08.346874+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:09.347403+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010849 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:10.347815+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:11.348163+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:12.348477+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:13.348651+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.069068909s of 12.073123932s, submitted: 1
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:14.348796+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010258 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:15.348924+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:16.349070+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:17.349447+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:18.349765+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23883400 session 0x558d238701e0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882c00 session 0x558d22f345a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:19.349910+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010126 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:20.350057+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:21.350179+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:22.350311+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:23.350420+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:24.350562+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010126 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:25.350742+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:26.350952+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:27.351176+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:28.351432+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.483628273s of 15.492028236s, submitted: 2
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:29.351567+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010258 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:30.351695+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:31.351836+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 2400256 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:32.351970+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:33.352150+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:34.352287+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011770 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:35.352441+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:36.352628+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:37.352779+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:38.353001+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:39.353166+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011770 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:40.353296+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:41.353463+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.622843742s of 12.630350113s, submitted: 2
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:42.353639+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:43.353772+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:44.353921+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011638 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:45.354155+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:46.354474+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:47.354655+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:48.354850+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:49.355044+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011638 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:50.355162+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:51.355287+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:52.355659+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:53.355809+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:54.355937+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011638 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33000 session 0x558d221db4a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d209d9c00 session 0x558d232efa40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:55.356074+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:56.356234+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:57.356430+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:58.356643+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:49:59.356809+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011638 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:00.356924+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:01.357076+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:02.357159+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:03.357350+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:04.357480+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011638 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:05.357609+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.092479706s of 24.096317291s, submitted: 1
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:06.357764+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:07.357904+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:08.358190+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:09.358329+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011770 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:10.358474+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:11.358596+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:12.358757+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:13.358954+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:14.359158+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013282 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:15.359346+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:16.359563+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:17.359725+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.254719734s of 12.262475967s, submitted: 2
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:18.359930+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:19.360073+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012691 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:20.360215+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88711168 unmapped: 2392064 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:21.361246+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:22.361378+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:23.361782+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:24.362004+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:25.362171+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:26.362324+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:27.362428+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:28.362633+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:29.362780+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:30.362941+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:31.363238+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:32.363441+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:33.363616+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:34.363763+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:35.363920+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2383872 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:36.364322+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:37.364459+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:38.364611+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:39.364734+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:40.374601+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:41.374741+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:42.374908+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:43.375060+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:44.375211+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:45.375339+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:46.375536+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:47.375720+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:48.375890+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:49.376026+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:50.376160+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:51.376333+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:52.376480+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:53.376601+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:54.376716+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:55.376835+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:56.376996+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:57.377170+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:58.377421+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:50:59.377623+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:00.377781+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:01.377917+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:02.378199+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:03.378345+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:04.378508+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:05.378637+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:06.378765+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:07.378952+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:08.379127+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:09.379242+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:10.379465+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:11.379668+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:12.379787+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:13.379933+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:14.380648+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:15.381008+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:16.381482+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:17.381698+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:18.381909+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:19.382382+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:20.382581+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:21.382752+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:22.382969+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:23.383351+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:24.383827+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:25.384075+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:26.384585+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:27.384813+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:28.385166+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:29.385363+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882800 session 0x558d214b9a40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:30.385646+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:31.385900+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:32.386169+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23883400 session 0x558d245c4d20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23882c00 session 0x558d245d8960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:33.386365+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:34.386526+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:35.386729+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:36.386948+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:37.387114+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:38.387272+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:39.387407+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012559 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:40.387640+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 82.736328125s of 82.741722107s, submitted: 2
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:41.387778+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:42.387914+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:43.388027+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:44.388213+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012691 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:45.388378+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 2375680 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:46.388494+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:47.388615+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:48.388704+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:49.388785+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014203 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:50.388925+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:51.389051+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:52.389187+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:53.389312+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.260792732s of 13.267044067s, submitted: 2
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:54.389397+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014071 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:55.389517+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:56.389627+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:57.389747+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:58.390059+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:51:59.390158+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014071 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 2367488 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:00.390304+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:01.390433+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:02.390554+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:03.390700+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:04.390866+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013939 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:05.391000+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:06.391163+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:07.391385+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d23883000 session 0x558d245c4b40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 ms_handle_reset con 0x558d21f33800 session 0x558d24251e00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fc644000/0x0/0x4ffc00000, data 0x1086f8/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:08.391582+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 2359296 heap: 91103232 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:09.391697+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _renew_subs
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.101872444s of 15.108276367s, submitted: 2
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _renew_subs
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022869 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 97148928 unmapped: 2342912 heap: 99491840 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:10.391868+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _renew_subs
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 153 ms_handle_reset con 0x558d21f33800 session 0x558d245c50e0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 19030016 heap: 107888640 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:11.392002+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 27344896 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:12.392132+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fb1c6000/0x0/0x4ffc00000, data 0x157ea5e/0x1643000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [0,1])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 154 ms_handle_reset con 0x558d23882800 session 0x558d24ad9a40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88956928 unmapped: 27328512 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:13.392264+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c5000/0x0/0x4ffc00000, data 0x1580b66/0x1646000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 27312128 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:14.392377+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171994 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88973312 unmapped: 27312128 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:15.392486+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:16.392584+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:17.392694+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:18.392832+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882c00
Nov 24 10:15:34 compute-0 nova_compute[257700]: 2025-11-24 10:15:34.132 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:34 compute-0 nova_compute[257700]: 2025-11-24 10:15:34.133 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:34 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26827 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:19.392965+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172126 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:20.393193+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:21.393371+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:22.393497+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:23.393619+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:24.393746+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172126 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:25.393876+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:26.394040+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:27.394220+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:28.394434+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:29.394615+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172126 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:30.394793+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:31.395040+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:32.395272+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c1000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:33.395398+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:34.395536+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 27279360 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.825229645s of 25.062850952s, submitted: 69
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170222 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:35.395662+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:36.395801+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:37.395967+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:38.396205+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:39.396370+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170222 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:40.396543+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:41.396665+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:42.396796+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:43.396954+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:44.397125+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170222 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:45.397316+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:46.397478+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:47.397762+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:48.399322+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:49.400060+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170222 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:50.400747+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:51.400989+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:52.401405+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:53.402158+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:54.402877+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 27418624 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170222 data_alloc: 218103808 data_used: 184320
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:55.403390+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 27418624 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:56.403574+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 27418624 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:57.403718+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 27418624 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:58.403925+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.014596939s of 24.018394470s, submitted: 1
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 ms_handle_reset con 0x558d23883000 session 0x558d24ab6780
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 27410432 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fb1c3000/0x0/0x4ffc00000, data 0x1582b38/0x1649000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 ms_handle_reset con 0x558d23883400 session 0x558d24ab7e00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:52:59.404054+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96878592 unmapped: 19406848 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193444 data_alloc: 218103808 data_used: 7000064
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:00.404503+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96878592 unmapped: 19406848 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _renew_subs
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 157 ms_handle_reset con 0x558d2336c000 session 0x558d24abab40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:01.404627+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 19365888 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fadbc000/0x0/0x4ffc00000, data 0x1986d64/0x1a4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:02.404936+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 19365888 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:03.405225+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 19365888 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:04.405520+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 19365888 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227084 data_alloc: 218103808 data_used: 7000064
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:05.405780+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 19365888 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fadb8000/0x0/0x4ffc00000, data 0x1988d36/0x1a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:06.406006+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 96919552 unmapped: 19365888 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:07.406156+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fadb8000/0x0/0x4ffc00000, data 0x1988d36/0x1a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d243a7e00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 19054592 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:08.406314+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 19054592 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:09.406500+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 97230848 unmapped: 19054592 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x19acd59/0x1a77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:10.406677+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258577 data_alloc: 234881024 data_used: 11198464
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x19acd59/0x1a77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x19acd59/0x1a77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:11.406835+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:12.407004+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x19acd59/0x1a77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:13.407166+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x19acd59/0x1a77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:14.407368+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:15.407501+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258577 data_alloc: 234881024 data_used: 11198464
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:16.407692+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad95000/0x0/0x4ffc00000, data 0x19acd59/0x1a77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:17.407825+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:18.408026+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:19.408163+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 15163392 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:20.408320+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258577 data_alloc: 234881024 data_used: 11198464
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 14704640 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.118104935s of 22.209205627s, submitted: 39
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:21.408458+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101179392 unmapped: 15106048 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad33000/0x0/0x4ffc00000, data 0x1a0ed59/0x1ad9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:22.408598+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:23.408725+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:24.408865+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:25.409010+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269699 data_alloc: 234881024 data_used: 11309056
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:26.409150+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad2f000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:27.409273+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:28.409459+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:29.409577+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:30.409688+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269699 data_alloc: 234881024 data_used: 11309056
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:31.409861+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad2f000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:32.410001+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:33.410141+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:34.410291+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:35.410442+0000)
Nov 24 10:15:34 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28334 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
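[annotation] The ceph-mgr audit channel records each dispatched command as a JSON array of command descriptors. The sketch below reconstructs the originating CLI call from the entry above; mapping "export": true back to an --export flag is my inference from the field name, not something the log states:

```python
import json

# Pull apart the JSON command descriptor embedded in the mgr audit line.
audit_cmd = '[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]'
(cmd,) = json.loads(audit_cmd)
flags = [f"--{k}" for k, v in cmd.items()
         if isinstance(v, bool) and v and k != "prefix"]
print("ceph", cmd["prefix"], *flags)   # -> ceph orch ls --export
```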
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269699 data_alloc: 234881024 data_used: 11309056
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 nova_compute[257700]: 2025-11-24 10:15:34.151 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:34 compute-0 nova_compute[257700]: 2025-11-24 10:15:34.151 257704 DEBUG nova.compute.manager [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
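[annotation] The two nova_compute lines show an oslo.service periodic task firing and then short-circuiting because reclaim_instance_interval is disabled (<= 0). A minimal sketch of that guard pattern, using a stand-in CONF object rather than the real oslo.config globals — the genuine implementation lives in nova/compute/manager.py, per the path in the log:

```python
import logging

LOG = logging.getLogger("nova.compute.manager")

class CONF:                          # stand-in for the oslo.config CONF object
    reclaim_instance_interval = 0    # disabled in this deployment, per the log

def _reclaim_queued_deletes(context=None):
    interval = CONF.reclaim_instance_interval
    if interval <= 0:
        LOG.debug("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # Only reached when the operator enables deferred delete: look up
    # soft-deleted instances older than `interval` seconds and purge them.

logging.basicConfig(level=logging.DEBUG)
_reclaim_queued_deletes()
```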
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:36.410634+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:37.410782+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad2f000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:38.410947+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad2f000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:39.411073+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:40.411200+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269699 data_alloc: 234881024 data_used: 11309056
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:41.411354+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad2f000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:42.411520+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:43.411665+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101220352 unmapped: 15065088 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.617031097s of 23.632219315s, submitted: 3
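[annotation] The _kv_sync_thread utilization line reports idle time over the reporting window plus the number of submitted transactions, so the busy fraction is one subtraction away:

```python
# Busy fraction from the "_kv_sync_thread utilization" line above: idle
# 23.617 s of a 23.632 s window with 3 submits, i.e. the BlueStore KV sync
# thread was busy well under 0.1% of the time -- an essentially idle OSD.
idle, window, submitted = 23.617031097, 23.632219315, 3
busy = window - idle
print(f"busy {busy:.3f}s of {window:.3f}s ({busy/window:.3%}), "
      f"{submitted/window:.2f} submits/s")
```

The later occurrences of this line in the log (e.g. idle 14.53 s of 14.80 s with 109 submits) work out the same way and still stay below 2% busy.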
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883400 session 0x558d231c9680
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:44.411781+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fad2f000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d231bcd20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 14516224 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:45.411899+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c800 session 0x558d23bc8780
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309974 data_alloc: 234881024 data_used: 11833344
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336cc00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336cc00 session 0x558d215c6960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 14401536 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:46.412053+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 14401536 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:47.412210+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 14401536 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa822000/0x0/0x4ffc00000, data 0x1f1edbb/0x1fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:48.412395+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 14401536 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:49.412543+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 14401536 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa822000/0x0/0x4ffc00000, data 0x1f1edbb/0x1fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:50.412970+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1309990 data_alloc: 234881024 data_used: 11833344
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101916672 unmapped: 14368768 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa822000/0x0/0x4ffc00000, data 0x1f1edbb/0x1fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:51.413772+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 101916672 unmapped: 14368768 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:52.414632+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105340928 unmapped: 10944512 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa822000/0x0/0x4ffc00000, data 0x1f1edbb/0x1fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:53.414798+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 9420800 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:54.415356+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 9420800 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:55.415530+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345710 data_alloc: 234881024 data_used: 17121280
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 9420800 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa822000/0x0/0x4ffc00000, data 0x1f1edbb/0x1fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:56.416184+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 9388032 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:57.416351+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 9388032 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:58.416583+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 9388032 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:53:59.416791+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 9388032 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:00.416927+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345710 data_alloc: 234881024 data_used: 17121280
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 9388032 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:01.417092+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106897408 unmapped: 9388032 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa822000/0x0/0x4ffc00000, data 0x1f1edbb/0x1fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:02.417466+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 9355264 heap: 116285440 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:03.417699+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.244815826s of 19.362913132s, submitted: 31
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 4325376 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:04.417897+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112279552 unmapped: 7151616 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:05.418083+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423254 data_alloc: 234881024 data_used: 18010112
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 8413184 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8db7000/0x0/0x4ffc00000, data 0x27e9dbb/0x28b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:06.418342+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8db7000/0x0/0x4ffc00000, data 0x27e9dbb/0x28b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 8413184 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:07.418497+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 8413184 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:08.418718+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 8413184 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:09.418914+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8db7000/0x0/0x4ffc00000, data 0x27e9dbb/0x28b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111017984 unmapped: 8413184 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:10.419163+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8db7000/0x0/0x4ffc00000, data 0x27e9dbb/0x28b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423558 data_alloc: 234881024 data_used: 18018304
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 8404992 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:11.419399+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d21df4780
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 8404992 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:12.419613+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d243a6000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 11698176 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:13.419897+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 11698176 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:14.420182+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 11698176 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:15.420369+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277852 data_alloc: 234881024 data_used: 11833344
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 11698176 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:16.420524+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f93a4000/0x0/0x4ffc00000, data 0x1a12d59/0x1add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 11698176 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:17.420828+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d24abbc20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d23870b40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107732992 unmapped: 11698176 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:18.421047+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.526162148s of 14.800761223s, submitted: 109
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c800 session 0x558d22262000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:19.421179+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:20.421320+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:21.421444+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:22.421653+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:23.421784+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:24.421981+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:25.422136+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:26.422265+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:27.422415+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:28.422610+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:29.422732+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:30.422853+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:31.422963+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:32.423051+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:33.423198+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:34.423265+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:35.423373+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:36.423547+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:37.423640+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:38.423785+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:39.423942+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:40.424123+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:41.424243+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:42.424410+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:43.424545+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:44.424679+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:45.424802+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:46.424962+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:47.425094+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:48.425285+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:49.425462+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:50.425617+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211406 data_alloc: 218103808 data_used: 7524352
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:51.425792+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104726528 unmapped: 14704640 heap: 119431168 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.755134583s of 33.798049927s, submitted: 15
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d242505a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:52.425940+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104734720 unmapped: 24215552 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:53.426131+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104734720 unmapped: 24215552 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:54.426328+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f99be000/0x0/0x4ffc00000, data 0x1be4d36/0x1cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104734720 unmapped: 24215552 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:55.427333+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258560 data_alloc: 218103808 data_used: 7000064
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104734720 unmapped: 24215552 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:56.427509+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d242605a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 24199168 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:57.427630+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 24199168 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:58.427846+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:54:59.428219+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f99bd000/0x0/0x4ffc00000, data 0x1be4d59/0x1caf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:00.428514+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305477 data_alloc: 234881024 data_used: 11505664
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:01.429019+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f99bd000/0x0/0x4ffc00000, data 0x1be4d59/0x1caf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:02.429627+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:03.430155+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:04.430358+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:05.430534+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305477 data_alloc: 234881024 data_used: 11505664
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:06.430818+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:07.431036+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f99bd000/0x0/0x4ffc00000, data 0x1be4d59/0x1caf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 23347200 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:08.431386+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.502574921s of 16.565576553s, submitted: 10
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 20357120 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:09.431621+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:10.431823+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364337 data_alloc: 234881024 data_used: 12042240
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:11.432052+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:12.432367+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:13.432660+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:14.432797+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:15.432945+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364337 data_alloc: 234881024 data_used: 12042240
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:16.433191+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:17.433370+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:18.433619+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:19.433900+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:20.434069+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364337 data_alloc: 234881024 data_used: 12042240
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:21.434240+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:22.434457+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:23.434686+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:24.434900+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:25.435041+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364337 data_alloc: 234881024 data_used: 12042240
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:26.435239+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:27.435373+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:28.435535+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:29.435708+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:30.435876+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364337 data_alloc: 234881024 data_used: 12042240
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:31.436039+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:32.436198+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:33.436350+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:34.436482+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:35.436595+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364337 data_alloc: 234881024 data_used: 12042240
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:36.436717+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:37.436844+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9308000/0x0/0x4ffc00000, data 0x2299d59/0x2364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 20283392 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:38.437073+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d231bdc20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.224287033s of 30.359544754s, submitted: 50
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103981056 unmapped: 24969216 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:39.437275+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d236583c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:40.437407+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219187 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:41.437550+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:42.437715+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:43.437895+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:44.438045+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:45.438179+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219187 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:46.438347+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 24961024 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:47.438493+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:48.438661+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:49.438817+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:50.439703+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219187 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:51.439911+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:52.440065+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:53.440202+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:54.440341+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:55.440511+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa019000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219187 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:56.440642+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 25010176 heap: 128950272 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336cc00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336cc00 session 0x558d215d74a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d24261c20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d24ad83c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d214be960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.965669632s of 18.014984131s, submitted: 19
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:57.440796+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d22f35e00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883400 session 0x558d24ad8780
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104701952 unmapped: 28450816 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d24455e00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d231bc000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d24ab74a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:58.441006+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104701952 unmapped: 28450816 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:55:59.441190+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104701952 unmapped: 28450816 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f979f000/0x0/0x4ffc00000, data 0x1e02d46/0x1ecd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d244552c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:00.441326+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104701952 unmapped: 28450816 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d24abb860
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285605 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:01.441455+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104685568 unmapped: 28467200 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d2158e5a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d2149da40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:02.441582+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 104710144 unmapped: 28442624 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f979d000/0x0/0x4ffc00000, data 0x1e02d79/0x1ecf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:03.441706+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 103964672 unmapped: 29188096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2620 syncs, 4.03 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1371 writes, 3660 keys, 1371 commit groups, 1.0 writes per commit group, ingest: 2.75 MB, 0.00 MB/s
                                           Interval WAL: 1371 writes, 611 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:04.441827+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:05.442007+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340445 data_alloc: 234881024 data_used: 11751424
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f979d000/0x0/0x4ffc00000, data 0x1e02d79/0x1ecf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:06.442178+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:07.442346+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:08.442497+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:09.442662+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f979d000/0x0/0x4ffc00000, data 0x1e02d79/0x1ecf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:10.442811+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340445 data_alloc: 234881024 data_used: 11751424
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:11.442963+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:12.443128+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f979d000/0x0/0x4ffc00000, data 0x1e02d79/0x1ecf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:13.443283+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:14.443463+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 26656768 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.491628647s of 17.575763702s, submitted: 24
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:15.443586+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 24264704 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386909 data_alloc: 234881024 data_used: 11845632
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:16.443841+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108838912 unmapped: 24313856 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f91d7000/0x0/0x4ffc00000, data 0x23c7d79/0x2494000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:17.443982+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 25157632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:18.444208+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 25157632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:19.444353+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 25157632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:20.444523+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 25157632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f915a000/0x0/0x4ffc00000, data 0x2444d79/0x2511000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391019 data_alloc: 234881024 data_used: 11915264
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:21.444642+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 25157632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:22.444775+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 25157632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:23.444884+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 24887296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:24.445055+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 24887296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:25.445286+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 24887296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f913a000/0x0/0x4ffc00000, data 0x2465d79/0x2532000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389963 data_alloc: 234881024 data_used: 11915264
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:26.445454+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 24887296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:27.445588+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 24887296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:28.445742+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 24887296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.405244827s of 14.579542160s, submitted: 62
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:29.445890+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108339200 unmapped: 24813568 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9131000/0x0/0x4ffc00000, data 0x246ed79/0x253b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:30.446034+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 24805376 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390067 data_alloc: 234881024 data_used: 11915264
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:31.446285+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 24805376 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:32.447180+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 24805376 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:33.448315+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 24788992 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d215d70e0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d400 session 0x558d24ab6d20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d800 session 0x558d24ab63c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d24ab7e00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d24ab74a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:34.448496+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:35.449232+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8924000/0x0/0x4ffc00000, data 0x2c7bd79/0x2d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455854 data_alloc: 234881024 data_used: 11915264
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:36.450170+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:37.450653+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8921000/0x0/0x4ffc00000, data 0x2c7ed79/0x2d4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:38.450873+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:39.451014+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:40.451161+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1455854 data_alloc: 234881024 data_used: 11915264
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:41.451445+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109084672 unmapped: 24068096 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.207920074s of 12.297619820s, submitted: 26
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d400 session 0x558d24ab6000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8921000/0x0/0x4ffc00000, data 0x2c7ed79/0x2d4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:42.451633+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109109248 unmapped: 24043520 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:43.451768+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 22986752 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8920000/0x0/0x4ffc00000, data 0x2c7ed9c/0x2d4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:44.451909+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 17555456 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:45.452067+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 17555456 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1513695 data_alloc: 234881024 data_used: 19017728
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:46.452173+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8920000/0x0/0x4ffc00000, data 0x2c7ed9c/0x2d4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115630080 unmapped: 17522688 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8920000/0x0/0x4ffc00000, data 0x2c7ed9c/0x2d4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:47.452312+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115744768 unmapped: 17408000 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:48.452679+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115744768 unmapped: 17408000 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:49.452905+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115744768 unmapped: 17408000 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:50.453169+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f891e000/0x0/0x4ffc00000, data 0x2c7fd9c/0x2d4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 17375232 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514287 data_alloc: 234881024 data_used: 19021824
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:51.453374+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 17375232 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:52.453613+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 17375232 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:53.453747+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 17375232 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.486683846s of 12.517056465s, submitted: 8
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:54.453886+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 121249792 unmapped: 11902976 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:55.454082+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 11747328 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1599979 data_alloc: 234881024 data_used: 19943424
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f7e16000/0x0/0x4ffc00000, data 0x3788d9c/0x3856000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:56.454389+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 14491648 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:57.454549+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 14483456 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:58.454726+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 14483456 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f7e16000/0x0/0x4ffc00000, data 0x3788d9c/0x3856000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:56:59.454924+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 14450688 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:00.455132+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 14450688 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1600131 data_alloc: 234881024 data_used: 19947520
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:01.455294+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118702080 unmapped: 14450688 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:02.455417+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d231bdc20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118718464 unmapped: 14434304 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f7e16000/0x0/0x4ffc00000, data 0x3788d9c/0x3856000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336dc00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336dc00 session 0x558d2335dc20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:03.455528+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 19398656 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:04.455835+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 19398656 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:05.456170+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f88c9000/0x0/0x4ffc00000, data 0x2472d79/0x253f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 19398656 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400713 data_alloc: 234881024 data_used: 10334208
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:06.456406+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 19398656 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d242c9e00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.439421654s of 12.743181229s, submitted: 128
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d22f345a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:07.456550+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d232aa960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:08.456805+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:09.456968+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:10.457125+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239810 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:11.457252+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:12.457578+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:13.457808+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d21e2d680
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336c400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610000 session 0x558d2158e3c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:14.458185+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21611800 session 0x558d23871e00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23883000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:15.458331+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239810 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:16.458618+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:17.458908+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:18.459209+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:19.459355+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:20.459624+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239810 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:21.459808+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.715085030s of 14.812863350s, submitted: 35
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 108969984 unmapped: 24182784 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:22.459983+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 24150016 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:23.460158+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,1])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109150208 unmapped: 24002560 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:24.460316+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 23904256 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:25.460496+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109248512 unmapped: 23904256 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239518 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:26.460685+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 23896064 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:27.460806+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109256704 unmapped: 23896064 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:28.461001+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24ab74a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109281280 unmapped: 23871488 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:29.461161+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109289472 unmapped: 23863296 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:30.461393+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9790000/0x0/0x4ffc00000, data 0x1a02d36/0x1acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 23855104 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277516 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:31.461546+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 23855104 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:32.461669+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 23855104 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21611800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21611800 session 0x558d24ab6b40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:33.461854+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9790000/0x0/0x4ffc00000, data 0x1a02d36/0x1acc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d23643a40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109297664 unmapped: 23855104 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d242623c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.204575539s of 12.448491096s, submitted: 356
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d242612c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:34.462015+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 23814144 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d25c9ec00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:35.462174+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109338624 unmapped: 23814144 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:36.462436+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1281167 data_alloc: 218103808 data_used: 4960256
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 23977984 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:37.462583+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f978f000/0x0/0x4ffc00000, data 0x1a02d46/0x1acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 23977984 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:38.462886+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 23977984 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:39.463169+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 23977984 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f978f000/0x0/0x4ffc00000, data 0x1a02d46/0x1acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:40.463449+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 23977984 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:41.463644+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302295 data_alloc: 218103808 data_used: 8114176
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f978f000/0x0/0x4ffc00000, data 0x1a02d46/0x1acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109174784 unmapped: 23977984 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:42.463909+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109182976 unmapped: 23969792 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:43.464150+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109182976 unmapped: 23969792 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f978f000/0x0/0x4ffc00000, data 0x1a02d46/0x1acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:44.464441+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109182976 unmapped: 23969792 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:45.464581+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109043712 unmapped: 24109056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:46.464745+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302295 data_alloc: 218103808 data_used: 8114176
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109043712 unmapped: 24109056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.845697403s of 12.867346764s, submitted: 7
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:47.464920+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111583232 unmapped: 21569536 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:48.465250+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111435776 unmapped: 21716992 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:49.465446+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92c1000/0x0/0x4ffc00000, data 0x1ec7d46/0x1f92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 21692416 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:50.465566+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 21692416 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:51.465776+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351943 data_alloc: 218103808 data_used: 8171520
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 21692416 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:52.465899+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 21692416 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:53.466055+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 21684224 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:54.466236+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 22085632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:55.466383+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92a9000/0x0/0x4ffc00000, data 0x1ee8d46/0x1fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 22085632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:56.466546+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346975 data_alloc: 218103808 data_used: 8171520
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 22085632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:57.466731+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 22085632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:58.466979+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 22085632 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:57:59.467181+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.613057137s of 12.828779221s, submitted: 73
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21d78400 session 0x558d23318f00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d25c9e000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:00.467377+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:01.467782+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346999 data_alloc: 218103808 data_used: 8171520
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f929b000/0x0/0x4ffc00000, data 0x1ef6d46/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:02.467962+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:03.468202+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:04.468450+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:05.468660+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f929b000/0x0/0x4ffc00000, data 0x1ef6d46/0x1fc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:06.468908+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347375 data_alloc: 218103808 data_used: 8171520
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111091712 unmapped: 22061056 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:07.469074+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 22052864 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:08.469279+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 22052864 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9298000/0x0/0x4ffc00000, data 0x1ef9d46/0x1fc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:09.469425+0000)
Nov 24 10:15:34 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 22052864 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:10.469500+0000)
Nov 24 10:15:34 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:34.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111099904 unmapped: 22052864 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:11.469661+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1347679 data_alloc: 218103808 data_used: 8179712
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.016714096s of 12.032507896s, submitted: 5
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 21880832 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:12.469793+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 21880832 heap: 133152768 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:13.469920+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20000 session 0x558d214bf4a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20400 session 0x558d242625a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d243a6f00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9288000/0x0/0x4ffc00000, data 0x1f09d46/0x1fd4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 25714688 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d24ab63c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d242c9c20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:14.470061+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 25714688 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:15.470213+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2454da8/0x2520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 25714688 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:16.470384+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395763 data_alloc: 218103808 data_used: 8179712
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 25714688 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:17.470520+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 25706496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:18.470724+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 26181632 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:19.470851+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2454da8/0x2520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111173632 unmapped: 26181632 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:20.471040+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20000 session 0x558d2158e3c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111181824 unmapped: 26173440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:21.471189+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2454da8/0x2520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395548 data_alloc: 218103808 data_used: 8179712
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2454da8/0x2520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111206400 unmapped: 26148864 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:22.471313+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2454da8/0x2520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 22528000 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:23.471494+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 22528000 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:24.471647+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 22528000 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.056807518s of 13.206870079s, submitted: 52
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:25.471801+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d3c000/0x0/0x4ffc00000, data 0x2454da8/0x2520000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 22478848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:26.471946+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1432228 data_alloc: 234881024 data_used: 13516800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 22478848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:27.472167+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d38000/0x0/0x4ffc00000, data 0x2458da8/0x2524000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 22478848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:28.472346+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 22478848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:29.472498+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 22478848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:30.472628+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 22478848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8d38000/0x0/0x4ffc00000, data 0x2458da8/0x2524000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:31.472753+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1432476 data_alloc: 234881024 data_used: 13516800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114900992 unmapped: 22454272 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:32.472932+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117530624 unmapped: 19824640 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:33.473074+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 18751488 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:34.473201+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118652928 unmapped: 18702336 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f827c000/0x0/0x4ffc00000, data 0x2f14da8/0x2fe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:35.473423+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118685696 unmapped: 18669568 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:36.473577+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520180 data_alloc: 234881024 data_used: 14266368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118693888 unmapped: 18661376 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:37.473830+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.298201561s of 12.570601463s, submitted: 89
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 19447808 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:38.474011+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 19447808 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8279000/0x0/0x4ffc00000, data 0x2f17da8/0x2fe3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:39.474153+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8279000/0x0/0x4ffc00000, data 0x2f17da8/0x2fe3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118079488 unmapped: 19275776 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:40.474350+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f825a000/0x0/0x4ffc00000, data 0x2f36da8/0x3002000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118079488 unmapped: 19275776 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:41.474601+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518452 data_alloc: 234881024 data_used: 14270464
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 19267584 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:42.474813+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 19267584 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:43.474973+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 19267584 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:44.475166+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8254000/0x0/0x4ffc00000, data 0x2f3cda8/0x3008000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 19267584 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:45.475401+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118292480 unmapped: 19062784 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:46.475539+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519144 data_alloc: 234881024 data_used: 14270464
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f824a000/0x0/0x4ffc00000, data 0x2f46da8/0x3012000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118292480 unmapped: 19062784 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:47.475675+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:48.475831+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:49.476008+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f824a000/0x0/0x4ffc00000, data 0x2f46da8/0x3012000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:50.476232+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:51.476362+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519144 data_alloc: 234881024 data_used: 14270464
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:52.476493+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.050909042s of 15.083417892s, submitted: 9
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8244000/0x0/0x4ffc00000, data 0x2f4cda8/0x3018000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:53.476615+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 19054592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:54.476736+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 18997248 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:55.476864+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8241000/0x0/0x4ffc00000, data 0x2f4dda8/0x3019000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 18997248 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:56.477003+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519252 data_alloc: 234881024 data_used: 14270464
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118358016 unmapped: 18997248 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:57.477154+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 18989056 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:58.477323+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 18989056 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:58:59.477453+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20570112 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:00.477596+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 20570112 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:01.477768+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1519524 data_alloc: 234881024 data_used: 14270464
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f823e000/0x0/0x4ffc00000, data 0x2f52da8/0x301e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20561920 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:02.477940+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20561920 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:03.478085+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20561920 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.415008545s of 11.443326950s, submitted: 9
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:04.478332+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20561920 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:05.478464+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20561920 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:06.478655+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520188 data_alloc: 234881024 data_used: 14270464
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 20561920 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8239000/0x0/0x4ffc00000, data 0x2f56da8/0x3022000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:07.478821+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20553728 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:08.479035+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116801536 unmapped: 20553728 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:09.479186+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8239000/0x0/0x4ffc00000, data 0x2f56da8/0x3022000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:10.479344+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:11.479563+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520168 data_alloc: 234881024 data_used: 14270464
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:12.479779+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8234000/0x0/0x4ffc00000, data 0x2f5cda8/0x3028000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:13.480067+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:14.480322+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:15.480506+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.406557083s of 11.440868378s, submitted: 10
Nov 24 10:15:34 compute-0 ceph-osd[82549]: mgrc ms_handle_reset ms_handle_reset con 0x558d23035800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3769522832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3769522832,v1:192.168.122.100:6801/3769522832]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: get_auth_request con 0x558d21611800 auth_method 0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: mgrc handle_mgr_configure stats_period=5
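
The five lines ending at handle_mgr_configure are one complete mgr-session bounce: the messenger reported a reset, the client terminated its session to 192.168.122.100:6800, dialled the v2/v1 address pair again, re-ran authentication (auth_method 0 appears to mean "not yet negotiated", letting the server choose), and the mgr replied with stats_period=5, i.e. report perf counters every five seconds. A small pattern-matching sketch (hypothetical helper, not a Ceph API) for pulling such bounces out of a journal dump:

    import re

    RECONNECT = re.compile(r"mgrc reconnect (Terminating|Starting new) session with (\S+)")

    def mgr_bounces(lines):
        """Yield (action, address) for each mgrc session transition in the log."""
        for line in lines:
            m = RECONNECT.search(line)
            if m:
                yield m.group(1), m.group(2)

    # Against the block above this yields:
    #   ('Terminating',  'v2:192.168.122.100:6800/3769522832')
    #   ('Starting new', '[v2:192.168.122.100:6800/3769522832,v1:192.168.122.100:6801/3769522832]')
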
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:16.480719+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520076 data_alloc: 234881024 data_used: 14270464
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 20537344 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:17.480872+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8230000/0x0/0x4ffc00000, data 0x2f5fda8/0x302b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20529152 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:18.481015+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20529152 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:19.481190+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 20529152 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:20.481385+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20520960 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:21.481617+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1520164 data_alloc: 234881024 data_used: 14270464
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20520960 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:22.481855+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f822d000/0x0/0x4ffc00000, data 0x2f62da8/0x302e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 20520960 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:23.482028+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 20512768 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:24.482217+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 20512768 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:25.482383+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.978147507s of 10.000229836s, submitted: 7
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d2153c3c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20c00 session 0x558d24260d20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 20480000 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:26.482517+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360002 data_alloc: 218103808 data_used: 8179712
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d2335d0e0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:27.482698+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f876a000/0x0/0x4ffc00000, data 0x1f35d46/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:28.482910+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:29.483080+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8765000/0x0/0x4ffc00000, data 0x1f3ad46/0x2005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:30.483299+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:31.483435+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360378 data_alloc: 218103808 data_used: 8179712
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:32.483606+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:33.483763+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8765000/0x0/0x4ffc00000, data 0x1f3ad46/0x2005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d215d8000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d25c9ec00 session 0x558d24ad9c20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115474432 unmapped: 21880832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:34.483909+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24ab7c20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:35.484092+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:36.484334+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259967 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:37.484532+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:38.484861+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:39.485033+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:40.485166+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:41.485289+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259967 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:42.485470+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:43.485636+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:44.485959+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:45.486185+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:46.486349+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259967 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:47.486501+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:48.486731+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:49.486885+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:50.487041+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:51.487200+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259967 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:52.487320+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:53.487494+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:54.487656+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:55.487811+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:56.487962+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259967 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:57.488144+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:58.488297+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 24174592 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T09:59:59.488421+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d2149d4a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d2153cd20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20c00 session 0x558d2149da40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21f33800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21f33800 session 0x558d23bda1e0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.998332977s of 34.264808655s, submitted: 79
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d215d70e0
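
The burst just above, handle_auth_request ... added challenge immediately followed by ms_handle_reset on the same connection pointer, reads as routine cephx challenge/response churn as peers re-open short-lived connections to osd.0, not as a failure. Tallying challenges and resets per connection pointer is a short fold over the raw lines (helper name is illustrative):

    import re
    from collections import Counter

    CON = re.compile(r"(?:added challenge on|ms_handle_reset con) (0x[0-9a-f]+)")

    def churn(lines):
        """Count cephx challenges and connection resets per connection pointer."""
        return Counter(m.group(1) for line in lines for m in CON.finditer(line))

    # For the block above, 0x558d23882800, 0x558d23b20800, 0x558d23b20c00,
    # 0x558d21f33800 and 0x558d21610800 each appear as a challenge/reset pair.
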
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:00.488558+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:01.488669+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327283 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:02.488821+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92e1000/0x0/0x4ffc00000, data 0x1eb1d36/0x1f7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:03.488964+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:04.489067+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92e1000/0x0/0x4ffc00000, data 0x1eb1d36/0x1f7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d22f34000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:05.489195+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d215c7c20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:06.489370+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 24600576 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20c00 session 0x558d232ab4a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d24ab6960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329097 data_alloc: 218103808 data_used: 4837376
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:07.489503+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 24576000 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:08.489684+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 24576000 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:09.489823+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 21757952 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:10.489977+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 21757952 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92e0000/0x0/0x4ffc00000, data 0x1eb1d46/0x1f7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:11.490124+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 21757952 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24250960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d23319c20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393241 data_alloc: 234881024 data_used: 12484608
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:12.490242+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.449810028s of 12.505952835s, submitted: 11
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92e0000/0x0/0x4ffc00000, data 0x1eb1d46/0x1f7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d24aba3c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:13.490392+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:14.490557+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:15.490783+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:16.490958+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264968 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:17.491140+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:18.491310+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:19.491429+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:20.491580+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:21.491729+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264968 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:22.491939+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:23.492143+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:24.492268+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:25.492468+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:26.492617+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264968 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:27.492781+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:28.492922+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110157824 unmapped: 27197440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c0a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d23bdad20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20c00 session 0x558d232aaf00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20c00 session 0x558d24abad20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24abb680
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:29.493024+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.988937378s of 17.018053055s, submitted: 7
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114384896 unmapped: 22970368 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d2158e780
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:30.493163+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 27164672 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:31.493282+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 27164672 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327830 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:32.493423+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 27164672 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f93e8000/0x0/0x4ffc00000, data 0x1daad36/0x1e74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:33.493559+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 27156480 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:34.493693+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110198784 unmapped: 27156480 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d232aba40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f93e8000/0x0/0x4ffc00000, data 0x1daad36/0x1e74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:35.493849+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 27131904 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:36.493953+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 27131904 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331368 data_alloc: 218103808 data_used: 3067904
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:37.494056+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111378432 unmapped: 25976832 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f93e7000/0x0/0x4ffc00000, data 0x1daad59/0x1e75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:38.494165+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 23502848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f93e7000/0x0/0x4ffc00000, data 0x1daad59/0x1e75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:39.494289+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 23502848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20000 session 0x558d22276b40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.594711304s of 10.678098679s, submitted: 22
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d2149cd20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:40.494424+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 23502848 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d2335cf00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:41.494546+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271058 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:42.494718+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:43.494900+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:44.495090+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:45.495318+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:46.495476+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271058 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:47.495659+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:48.495850+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 27394048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:49.496024+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:50.496168+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:51.496327+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271058 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:52.496492+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:53.496649+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:54.496773+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:55.496897+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109969408 unmapped: 27385856 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:56.497061+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271058 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:57.497224+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:58.497425+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:00:59.497551+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:00.497682+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:01.497811+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271058 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:02.497972+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109977600 unmapped: 27377664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:03.498121+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 27369472 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:04.498310+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 27369472 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:05.498450+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 27369472 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:06.498676+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 27369472 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271058 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:07.498825+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 109985792 unmapped: 27369472 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d23870b40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d23a1ab40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20c00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20c00 session 0x558d23ab0f00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d23ab0960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.090379715s of 28.186922073s, submitted: 30
Nov 24 10:15:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 24 10:15:34 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2776402292' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:08.499143+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110059520 unmapped: 27295744 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d23ab0b40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:09.499331+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d214be5a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 27287552 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:10.499594+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 27287552 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92f6000/0x0/0x4ffc00000, data 0x1e9bd98/0x1f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:11.499776+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 27287552 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341067 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:12.499966+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110067712 unmapped: 27287552 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92f6000/0x0/0x4ffc00000, data 0x1e9bd98/0x1f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:13.500164+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110075904 unmapped: 27279360 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:14.500340+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110084096 unmapped: 27271168 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:15.500479+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110084096 unmapped: 27271168 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:16.500635+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110084096 unmapped: 27271168 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d24260780
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341067 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:17.500776+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92f6000/0x0/0x4ffc00000, data 0x1e9bd98/0x1f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d215c70e0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 27254784 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:18.500938+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d21d8a780
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110100480 unmapped: 27254784 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.295337677s of 10.542456627s, submitted: 29
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24ab6000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:19.501276+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 110133248 unmapped: 27222016 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:20.502720+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25542656 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:21.502860+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92f5000/0x0/0x4ffc00000, data 0x1e9bda8/0x1f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407206 data_alloc: 234881024 data_used: 12505088
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:22.503047+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92f5000/0x0/0x4ffc00000, data 0x1e9bda8/0x1f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:23.503235+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:24.503414+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:25.503616+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:26.503995+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407206 data_alloc: 234881024 data_used: 12505088
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:27.504167+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:28.504700+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 23658496 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f92f5000/0x0/0x4ffc00000, data 0x1e9bda8/0x1f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:29.505059+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 23650304 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:30.506174+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 23650304 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:31.506563+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.962337494s of 12.983584404s, submitted: 7
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 23298048 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445254 data_alloc: 234881024 data_used: 12660736
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:32.506728+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 20701184 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:33.506922+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 20381696 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:34.507129+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8dce000/0x0/0x4ffc00000, data 0x23c2da8/0x248e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 20381696 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:35.507295+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 20381696 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8dce000/0x0/0x4ffc00000, data 0x23c2da8/0x248e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:36.507433+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 20242432 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450736 data_alloc: 234881024 data_used: 13234176
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:37.507608+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 20242432 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:38.507799+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 20242432 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8dce000/0x0/0x4ffc00000, data 0x23c2da8/0x248e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:39.507970+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 20209664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:40.508155+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 20209664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:41.508362+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 20209664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450736 data_alloc: 234881024 data_used: 13234176
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:42.508522+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 20209664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:43.508657+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 20209664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:44.508806+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8dce000/0x0/0x4ffc00000, data 0x23c2da8/0x248e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 20176896 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:45.509011+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 20176896 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:46.509159+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.702803612s of 14.805799484s, submitted: 47
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d23a1b2c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d2158e1e0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 20176896 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d24ab63c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280017 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:47.509271+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:48.509447+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:49.509607+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:50.509744+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:51.509875+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280017 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:52.510058+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:53.510206+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:54.510319+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:55.510662+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:56.510770+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280017 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:57.510920+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:58.511174+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:01:59.511301+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:00.511418+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:01.511537+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280017 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:02.511654+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:03.511748+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:04.511896+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:05.512068+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c09000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:06.512185+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:07.512361+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280017 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d214b9a40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d23ac3e00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d243a72c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d215c9a40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.234964371s of 21.351840973s, submitted: 34
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 25878528 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d222632c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d232aaf00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24ab6960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d22276960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d215d72c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:08.512533+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 25927680 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:09.512669+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 25927680 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:10.512811+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 25927680 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:11.512983+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9840000/0x0/0x4ffc00000, data 0x1950da8/0x1a1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 25927680 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9840000/0x0/0x4ffc00000, data 0x1950da8/0x1a1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:12.513214+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314678 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 25927680 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:13.513447+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111427584 unmapped: 25927680 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:14.513589+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d2149d2c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111411200 unmapped: 25944064 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:15.513730+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f983f000/0x0/0x4ffc00000, data 0x1950dcb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111411200 unmapped: 25944064 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:16.513832+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:17.513981+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340867 data_alloc: 218103808 data_used: 6770688
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:18.514238+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:19.514384+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:20.514561+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f983f000/0x0/0x4ffc00000, data 0x1950dcb/0x1a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:21.514701+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:22.514855+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340867 data_alloc: 218103808 data_used: 6770688
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:23.514996+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111624192 unmapped: 25731072 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.050022125s of 16.154581070s, submitted: 31
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d2149d4a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:24.515145+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 25608192 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9111000/0x0/0x4ffc00000, data 0x207edcb/0x214b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:25.515294+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 25608192 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:26.515440+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 25608192 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:27.515566+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400983 data_alloc: 218103808 data_used: 6774784
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 24838144 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:28.515738+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 24838144 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:29.515877+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8cb3000/0x0/0x4ffc00000, data 0x24dcdcb/0x25a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d2158fc20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113213440 unmapped: 24141824 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:30.515996+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d24263a40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 24133632 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:31.516141+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d24263c20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21800 session 0x558d214bef00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 24125440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8c7b000/0x0/0x4ffc00000, data 0x2514dcb/0x25e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:32.516263+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1442101 data_alloc: 218103808 data_used: 6807552
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 24125440 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:33.516405+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 113287168 unmapped: 24068096 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:34.516521+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.491031647s of 10.713547707s, submitted: 69
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 21266432 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:35.516649+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 21266432 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:36.516796+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 21266432 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:37.516967+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8c59000/0x0/0x4ffc00000, data 0x2535ddb/0x2603000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1491485 data_alloc: 234881024 data_used: 13135872
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 21233664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:38.517159+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 21233664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:39.517315+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 21233664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:40.517529+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 21233664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:41.517707+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 21233664 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:42.517849+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1491485 data_alloc: 234881024 data_used: 13135872
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8c59000/0x0/0x4ffc00000, data 0x2535ddb/0x2603000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 21200896 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:43.518012+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 21200896 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:44.518311+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 21200896 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.482767105s of 10.491147041s, submitted: 2
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:45.518459+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8c59000/0x0/0x4ffc00000, data 0x2535ddb/0x2603000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 124125184 unmapped: 13230080 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:46.518607+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 13869056 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:47.518780+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578925 data_alloc: 234881024 data_used: 14553088
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 13787136 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:48.518948+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 13787136 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:49.519193+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:50.519326+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f821d000/0x0/0x4ffc00000, data 0x2f71ddb/0x303f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:51.519448+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f821d000/0x0/0x4ffc00000, data 0x2f71ddb/0x303f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:52.519585+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1580445 data_alloc: 234881024 data_used: 14639104
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:53.519735+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:54.519862+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f821d000/0x0/0x4ffc00000, data 0x2f71ddb/0x303f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:55.519988+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f821d000/0x0/0x4ffc00000, data 0x2f71ddb/0x303f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:56.520196+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d243a6000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d21df4b40
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 123576320 unmapped: 13778944 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:57.520340+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.993705750s of 12.190896034s, submitted: 85
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388441 data_alloc: 218103808 data_used: 5189632
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d24ad81e0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 16482304 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:58.520500+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 120872960 unmapped: 16482304 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:02:59.521197+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f937f000/0x0/0x4ffc00000, data 0x1e10dcb/0x1edd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 16474112 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:00.521333+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d232ef0e0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 120881152 unmapped: 16474112 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:01.521463+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d232aa960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:02.521611+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298138 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:03.522200+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:04.522706+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c08000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:05.523052+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:06.523524+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:07.523733+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298138 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:08.524069+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:09.524443+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c08000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:10.524696+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:11.524912+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:12.525040+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298138 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:13.525358+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c08000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:14.525649+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:15.525846+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:16.526030+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:17.526234+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298138 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:18.526417+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:19.526644+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9c08000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:20.526877+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:21.527062+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d22276960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d24abb860
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d23870960
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 19619840 heap: 137355264 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:22.527217+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d24ab7e00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b20800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.039752960s of 25.173450470s, submitted: 43
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357123 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b20800 session 0x558d24262000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d21610800 session 0x558d2158e5a0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:23.527379+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:24.527538+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:25.527727+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f94e2000/0x0/0x4ffc00000, data 0x1cafd98/0x1d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:26.527919+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:27.528093+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f94e2000/0x0/0x4ffc00000, data 0x1cafd98/0x1d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357067 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:28.528353+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:29.528512+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f94e2000/0x0/0x4ffc00000, data 0x1cafd98/0x1d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:30.528717+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 22691840 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:31.528925+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d000 session 0x558d22f35680
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 22380544 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f94e2000/0x0/0x4ffc00000, data 0x1cafd98/0x1d7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:32.529277+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 22380544 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358767 data_alloc: 218103808 data_used: 3010560
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:33.529394+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:34.529555+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f94be000/0x0/0x4ffc00000, data 0x1cd3d98/0x1d9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:35.529706+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:36.529916+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:37.530054+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401175 data_alloc: 234881024 data_used: 9367552
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:38.530354+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:39.530563+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:40.530746+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f94be000/0x0/0x4ffc00000, data 0x1cd3d98/0x1d9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:41.530931+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:42.531149+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 21954560 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401631 data_alloc: 234881024 data_used: 9379840
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:43.531282+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.676862717s of 20.775457382s, submitted: 32
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118685696 unmapped: 21823488 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:44.531489+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 21544960 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:45.531634+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x24abd98/0x2576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:46.531831+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:47.532000+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x24abd98/0x2576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472363 data_alloc: 234881024 data_used: 10211328
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:48.532218+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:49.532400+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:50.532550+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x24abd98/0x2576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:51.532695+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 21487616 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:52.532837+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x24abd98/0x2576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472363 data_alloc: 234881024 data_used: 10211328
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:53.532985+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:54.533133+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:55.533272+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:56.533466+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:57.533614+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472363 data_alloc: 234881024 data_used: 10211328
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:58.533773+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x24abd98/0x2576000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:03:59.533928+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:00.534066+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 21479424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.154628754s of 17.316360474s, submitted: 62
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23882800 session 0x558d23871c20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b21000 session 0x558d22277c20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b83400
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:01.534197+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23b83400 session 0x558d231c92c0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:02.534313+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:03.534450+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:04.534583+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:05.534706+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:06.534830+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:07.534958+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:08.535151+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:09.535238+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:10.535327+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:11.535453+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:12.535632+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:13.535798+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:14.535955+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:15.536084+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:16.536240+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:17.536402+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:18.536562+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:19.536700+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:20.536812+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:21.537297+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:22.537448+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:23.537609+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:24.537777+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:25.537912+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:26.538157+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:27.538951+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:28.539132+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:29.539269+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:30.539445+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:31.539663+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:32.539843+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:33.540001+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:34.540160+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:35.540314+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:36.540529+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:37.540653+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:38.540790+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:39.540978+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:40.541093+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 24231936 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:41.541268+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:42.541404+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:43.541576+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:44.541780+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:45.541929+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:46.542082+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:47.542297+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:48.542430+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:49.542600+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:50.542768+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 24223744 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:51.542921+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:52.543137+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:53.543254+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:54.543369+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:55.543493+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:56.543613+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:57.543825+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:58.544046+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:04:59.544162+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:00.544363+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:01.544486+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:02.544637+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:03.544785+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:04.544969+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 24215552 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:05.545172+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 24207360 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:06.545354+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 24207360 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:07.545476+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 24207360 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:08.547643+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 24207360 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:09.547802+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 24207360 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:10.549376+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 24207360 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:11.550631+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:12.551444+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:13.551581+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:14.552047+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:15.552194+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:16.552366+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:17.552551+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:18.552745+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:19.552884+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:20.553029+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:21.553700+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:22.553858+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:23.553989+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:24.554144+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 24199168 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:25.554247+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 24190976 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:26.554350+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 24117248 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'config diff' '{prefix=config diff}'
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'config show' '{prefix=config show}'
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:27.554450+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 24551424 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:28.554593+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 24674304 heap: 140509184 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:29.554737+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'log dump' '{prefix=log dump}'
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 115834880 unmapped: 35717120 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'perf dump' '{prefix=perf dump}'
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:30.554864+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'perf schema' '{prefix=perf schema}'
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:31.554993+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:32.555125+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:33.555243+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:34.555488+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:35.555626+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:36.555824+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:37.557260+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:38.557418+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:39.557537+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:40.557675+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:41.557780+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:42.557889+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:43.558096+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:44.558231+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:45.558359+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:46.558673+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:47.558790+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:48.558949+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:49.559074+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:50.559350+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:51.559470+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:52.559582+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:53.559718+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:54.559843+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:55.559951+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:56.560111+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:57.560225+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:58.560340+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:05:59.560460+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:00.560605+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:01.560732+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:02.560910+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:03.561135+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 3756 syncs, 3.51 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2622 writes, 8802 keys, 2622 commit groups, 1.0 writes per commit group, ingest: 8.59 MB, 0.01 MB/s
                                           Interval WAL: 2622 writes, 1136 syncs, 2.31 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:04.561296+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:05.561456+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:06.561605+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:07.561732+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:08.561887+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:09.562067+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:10.562245+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:11.562386+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:12.562541+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:13.562726+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:14.562882+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:15.562997+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:16.563144+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:17.563279+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:18.563428+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:19.563559+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:20.563714+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:21.563877+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:22.564016+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:23.564130+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:24.564313+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:25.564467+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:26.564625+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:27.564752+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:28.564911+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:29.565144+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:30.565270+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:31.565430+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:32.565569+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:33.565685+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:34.565814+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:35.565936+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:36.566080+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:37.566208+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:38.566353+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:39.566495+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:40.566628+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:41.566768+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:42.566990+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:43.567116+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:44.567189+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:45.567377+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:46.568678+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:47.569145+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:48.569863+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:49.570474+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:50.570647+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:51.570781+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:52.571382+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:53.571935+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:54.572120+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:55.572300+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:56.572412+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:57.572897+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:58.573343+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:06:59.573693+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:00.573935+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:01.574093+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:02.574490+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:03.574707+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:04.574933+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:05.575091+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:06.575387+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:07.575540+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:08.575920+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:09.576154+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:10.576391+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:11.576581+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:12.576739+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:13.576893+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:14.577075+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:15.577231+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 35471360 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:16.577477+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 35471360 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:17.578469+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 35471360 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:18.579993+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306771 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 35471360 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f9893000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:19.580975+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 35463168 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:20.581681+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 35463168 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:21.582151+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 201.047515869s of 201.123001099s, submitted: 20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:22.582557+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:23.582695+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:24.582869+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 35241984 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:25.583168+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 35233792 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:26.583474+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 35233792 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:27.583751+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 35233792 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:28.584235+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 35233792 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:29.584351+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 35233792 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:30.584777+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:31.585023+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 35233792 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:32.585459+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 35233792 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:33.585605+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 35233792 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:34.586076+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 35233792 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:35.586249+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 35233792 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:36.586682+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 35233792 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:37.586851+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 35225600 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:38.587151+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 35225600 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:39.587299+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 35225600 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:40.587436+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 35225600 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:41.587565+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 35225600 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:42.587806+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 35217408 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:43.587946+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 35217408 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:44.588228+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 35217408 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:45.588510+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 35217408 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:46.588772+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 35217408 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:47.588946+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 35217408 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:48.589245+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 35209216 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:49.589853+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 35209216 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:50.590211+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 35209216 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:51.590334+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 35209216 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:52.590656+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 35209216 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:53.590981+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 35201024 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:54.591238+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 35201024 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:55.591467+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 35201024 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:56.591656+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 35201024 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:57.591768+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 35201024 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:58.591906+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 35201024 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:07:59.592059+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 35201024 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:00.592202+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 35201024 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:01.592325+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:02.592475+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:03.592593+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:04.592736+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:05.592877+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:06.593052+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:07.593169+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:08.593429+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:09.593592+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:10.593942+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:11.594339+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:12.594701+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:13.594854+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:14.595245+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:15.595437+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:16.595592+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:17.595745+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:18.595952+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:19.596142+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:20.596362+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:21.596503+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:22.597917+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:23.598870+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:24.599620+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:25.600202+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:26.601342+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:27.601614+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:28.601878+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:29.602229+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:30.602865+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:31.603007+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:32.603676+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:33.603814+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:34.604302+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:35.604429+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:36.604694+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:37.604833+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:38.605155+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:39.605302+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:40.605544+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:41.605684+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:42.605903+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:43.606051+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:44.606220+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:45.606379+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:46.606544+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:47.606715+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:48.606897+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:49.607039+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:50.607227+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:51.607385+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:52.607564+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:53.607780+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:54.607947+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:55.608163+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:56.608399+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:57.608519+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:58.608692+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:08:59.608836+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:00.609036+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:01.609168+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:02.609381+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:03.609628+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:04.609843+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:05.610040+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:06.610184+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:07.610353+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:08.610644+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:09.610907+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:10.611150+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:11.611343+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:12.611679+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:13.611967+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:14.612266+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:15.612416+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:16.612679+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:17.612933+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:18.613186+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 35184640 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:19.613363+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 35176448 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:20.613684+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 35176448 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:21.613897+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 35176448 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:22.614136+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 35176448 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:23.614320+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 35176448 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:24.614578+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 35176448 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:25.614778+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 35176448 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:26.614936+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 35176448 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:27.615076+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 35176448 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:28.615451+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116375552 unmapped: 35176448 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:29.615734+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:30.616331+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:31.616601+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:32.616912+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:33.617204+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:34.617458+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:35.617698+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:36.617863+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:37.618071+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:38.618533+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:39.618752+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:40.619023+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:41.619201+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:42.619341+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:43.619534+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116383744 unmapped: 35168256 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:44.619743+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 35160064 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:45.619864+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 35160064 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:46.620043+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 35160064 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:47.620168+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 35160064 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:48.620442+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 35160064 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:49.620584+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 35160064 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:50.620779+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 35160064 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:51.620984+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 35160064 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:52.621160+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 35160064 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:53.621321+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 35160064 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:54.621506+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 35160064 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:55.621648+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 35151872 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:56.621845+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 35151872 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:57.621990+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 35151872 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:58.622162+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 35151872 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:09:59.622248+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 35151872 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:00.622428+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 35151872 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:01.622645+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 35151872 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:02.622816+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 35151872 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:03.622955+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 35151872 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:04.623133+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 35143680 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:05.623306+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 35143680 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:06.623681+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 35143680 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:07.623862+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 35135488 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:08.624200+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 35135488 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:09.624371+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 35135488 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:10.624528+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 35135488 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:11.624684+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 35135488 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:12.624827+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 35135488 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:13.624950+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 35135488 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:14.625192+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 35135488 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:15.625345+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 35135488 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:16.625509+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 35127296 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:17.625639+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 35127296 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:18.625892+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 35127296 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:19.626040+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 35127296 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:20.626165+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 35127296 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:21.626306+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 35127296 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:22.626444+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 35127296 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:23.626596+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 35127296 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:24.626740+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 35127296 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:25.626891+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 35127296 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:26.627052+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 35463168 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:27.627151+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 35463168 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:28.627421+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116088832 unmapped: 35463168 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:29.627687+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 35454976 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:30.627888+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 35454976 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:31.628081+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 35454976 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:32.628261+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 35454976 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:33.628382+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 35454976 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:34.628487+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 35454976 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:35.628626+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 35454976 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:36.628796+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 35454976 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:37.628993+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 35454976 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:38.629245+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 35454976 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:39.629379+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 35454976 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:40.629539+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116105216 unmapped: 35446784 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:41.629737+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116105216 unmapped: 35446784 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:42.629915+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116105216 unmapped: 35446784 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:43.630055+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116105216 unmapped: 35446784 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:44.630176+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116105216 unmapped: 35446784 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:45.630342+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116105216 unmapped: 35446784 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:46.630581+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116105216 unmapped: 35446784 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:47.630796+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116105216 unmapped: 35446784 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:48.631063+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:49.631281+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:50.631410+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:51.631600+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:52.631828+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:53.632269+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:54.632467+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:55.632627+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:56.632796+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:57.632972+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:58.633159+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:10:59.633310+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:00.633497+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:01.633682+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:02.634850+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:03.635033+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116113408 unmapped: 35438592 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets getting new tickets!
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:04.635252+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _finish_auth 0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:04.636137+0000)
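This is the one spot in the section where the routine tick does real work: _check_auth_tickets decides the session tickets are near expiry, a single request goes out to mon.compute-0 at v2:192.168.122.100:3300/0, and _finish_auth 0 reports success before the loop resumes. A toy reconstruction of that round; the class, renewal window, and ticket lifetime are all hypothetical:

    import datetime as dt

    class MonClient:
        """Toy model of the ticket-renewal round; every name is hypothetical."""

        def __init__(self, ticket_expiry, window=dt.timedelta(seconds=60)):
            self.ticket_expiry = ticket_expiry
            self.window = window

        def tick(self, now):
            print("tick")
            print("_check_auth_tickets")
            if self.ticket_expiry - now < self.window:
                print("_check_auth_tickets getting new tickets!")
                print("_send_mon_message to mon.compute-0")
                # The monitor's reply extends the tickets; 0 means success.
                self.ticket_expiry = now + dt.timedelta(hours=1)
                print("_finish_auth 0")

    mc = MonClient(dt.datetime(2025, 11, 24, 10, 12, tzinfo=dt.timezone.utc))
    mc.tick(dt.datetime(2025, 11, 24, 10, 11, 30, tzinfo=dt.timezone.utc))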
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:05.635452+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:06.635662+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:07.635869+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:08.636046+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:09.636238+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:10.636342+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:11.636445+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:12.636661+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:13.637211+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:14.637337+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:15.637516+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:16.637684+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:17.637835+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:18.638025+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:19.638179+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:20.638351+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116121600 unmapped: 35430400 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:21.638504+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 35422208 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:22.638645+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 35422208 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:23.638767+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 35422208 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:24.638918+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 35422208 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:25.639091+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 35422208 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:26.639311+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 35422208 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:27.639502+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 35422208 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:28.639781+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116129792 unmapped: 35422208 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:29.639925+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 35414016 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:30.640069+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 35414016 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:31.640238+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 35414016 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:32.640399+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 35414016 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:33.640546+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 35414016 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:34.640693+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 35414016 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:35.641041+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 35414016 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:36.641363+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 35414016 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:37.641701+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 35414016 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:38.642070+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 35414016 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:39.642467+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116137984 unmapped: 35414016 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:40.642706+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:41.642920+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:42.643153+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:43.643319+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:44.643483+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:45.643616+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:46.643777+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:47.643917+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:48.644295+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:49.644556+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:50.644744+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:51.645047+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:52.645257+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 35405824 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:53.645497+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 35397632 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:54.645690+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 35397632 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:55.645873+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 35397632 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:56.646141+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 35397632 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:57.646297+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 35397632 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:58.646577+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 35397632 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:11:59.646785+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 35397632 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:00.646955+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 35397632 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:01.647181+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 35397632 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:02.647513+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 35397632 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:03.647794+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116154368 unmapped: 35397632 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:04.648069+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 35389440 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:05.648354+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 35389440 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:06.648612+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 35389440 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:07.648748+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 35389440 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:08.649081+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 35389440 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:09.649335+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 35389440 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:10.650556+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 35389440 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:11.651434+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 35389440 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:12.652161+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 35389440 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:13.652696+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336c400 session 0x558d243a6d20
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d21610800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116162560 unmapped: 35389440 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d2336d400 session 0x558d24ad9860
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d2336d000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:14.652963+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 35381248 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d23883000 session 0x558d23bdbe00
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23882800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:15.653273+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 35381248 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:16.654912+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 35381248 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:17.655166+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 35381248 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:18.656666+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 35381248 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:19.657196+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 35381248 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:20.657705+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 35381248 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:21.658188+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116170752 unmapped: 35381248 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:22.658924+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:23.659235+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:24.659533+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:25.659720+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:26.660276+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:27.660434+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:28.660590+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:29.660815+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:30.660960+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:31.661168+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:32.661403+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:33.661635+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:34.661870+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:35.661998+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:36.662191+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:37.662408+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:38.662666+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116178944 unmapped: 35373056 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:39.662821+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 35364864 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:40.663012+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 35364864 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:41.663321+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 35364864 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:42.663564+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 35364864 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:43.663749+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 35364864 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:44.663891+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 35364864 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:45.664703+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 35364864 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:46.664850+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 35364864 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:47.665024+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 35364864 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:48.665256+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116187136 unmapped: 35364864 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:49.665456+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35356672 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:50.665756+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35356672 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:51.665945+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35356672 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:52.666171+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35356672 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:53.666597+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35356672 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:54.666883+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35356672 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:55.667132+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35356672 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:56.667330+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35356672 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:57.667463+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35356672 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:58.667611+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35356672 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:12:59.667752+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 ms_handle_reset con 0x558d25c9e000 session 0x558d214bf0e0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: handle_auth_request added challenge on 0x558d23b21000
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 35348480 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:00.667898+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 35348480 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:01.668133+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 35348480 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:02.668304+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 35348480 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:03.668571+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 35348480 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:04.668781+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 35348480 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:05.668973+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 35348480 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:06.669202+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 35348480 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:07.669400+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 35348480 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:08.669604+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 35348480 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:09.669768+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 35348480 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:10.669918+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 35340288 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:11.670171+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 35340288 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:12.670392+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 35340288 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:13.670594+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 35340288 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:14.670751+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 35340288 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:15.670931+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 35340288 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:16.671133+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 35340288 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:17.671261+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 35340288 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:18.671478+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116211712 unmapped: 35340288 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:19.671647+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:20.671788+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:21.671928+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:22.672242+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:23.672460+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:24.672640+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:25.672809+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:26.673013+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:27.673191+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:28.673362+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:29.673557+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:30.673721+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 35332096 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:31.673955+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 35323904 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:32.675227+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 35323904 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:33.675338+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 35323904 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:34.675447+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 35323904 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:35.675598+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 35323904 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:36.675712+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 35323904 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:37.675885+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 35323904 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:38.676082+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 35323904 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:39.676268+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 35323904 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:40.676440+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116228096 unmapped: 35323904 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:41.676557+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:42.676762+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:43.676962+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:44.677139+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:45.677311+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:46.677495+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:47.677653+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:48.677839+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:49.677971+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:50.678091+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:51.678289+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:52.678470+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:53.678599+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 35315712 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:54.678757+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:55.679014+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:56.679170+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:57.679335+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:58.679599+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:13:59.679754+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:00.679911+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:01.680075+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:02.680227+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:03.680390+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:04.680562+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:05.680677+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:06.680810+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:07.680948+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116244480 unmapped: 35307520 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:08.681149+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:09.681284+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:10.681419+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:11.681560+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:12.681720+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:13.681833+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:14.681965+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:15.682199+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: mgrc ms_handle_reset ms_handle_reset con 0x558d21611800
Nov 24 10:15:34 compute-0 ceph-osd[82549]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3769522832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3769522832,v1:192.168.122.100:6801/3769522832]
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: get_auth_request con 0x558d23b83400 auth_method 0
Nov 24 10:15:34 compute-0 ceph-osd[82549]: mgrc handle_mgr_configure stats_period=5
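[Annotation] The five mgrc lines above are one complete mgr session bounce: a transport reset, teardown of the session with v2:192.168.122.100:6800, a new session to the same mgr over its v2/v1 pair, an auth request, and the mgr pushing stats_period=5 back to the OSD. A throwaway counter for spotting such bounces in a journal export; the script is hypothetical, and only the "mgrc ms_handle_reset" marker comes from this log:

    # Editor's sketch (hypothetical tooling): count mgr session resets
    # per ceph-osd PID in a journald text export read from stdin.
    import re
    import sys
    from collections import Counter

    RESET = re.compile(r"ceph-osd\[(\d+)\]: mgrc ms_handle_reset")

    resets = Counter()
    for line in sys.stdin:
        m = RESET.search(line)
        if m:
            resets[m.group(1)] += 1

    for pid, n in resets.most_common():
        print(f"ceph-osd[{pid}]: {n} mgr session reset(s)")

Fed this capture on stdin, it would report a single reset for PID 82549.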
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:16.682372+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:17.682510+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:18.682712+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:19.682852+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:20.683022+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116252672 unmapped: 35299328 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:21.683290+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:22.683499+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:23.683646+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:24.683840+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:25.683979+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:26.684164+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:27.684305+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:28.684616+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:29.684768+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:30.684930+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:31.685078+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 35291136 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:32.685277+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:33.685452+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:34.685616+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:35.685737+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:36.685860+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:37.686005+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:38.686222+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:39.686375+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:40.686581+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:41.686718+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:42.686883+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:43.687083+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:44.687257+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 35282944 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:45.687399+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:46.687572+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:47.687728+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:48.687947+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:49.688147+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:50.688304+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:51.688437+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:52.688551+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:53.688774+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:54.688951+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:55.689142+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:56.689329+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:57.689451+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:58.689594+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:14:59.689716+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 35274752 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:15:00.689835+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 35266560 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 10:15:34 compute-0 ceph-osd[82549]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 10:15:34 compute-0 ceph-osd[82549]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306479 data_alloc: 218103808 data_used: 3002368
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'config diff' '{prefix=config diff}'
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:15:01.689967+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'config show' '{prefix=config show}'
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 35143680 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
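[Annotation] The do_command pairs are the OSD servicing admin-socket requests (config diff, config show, counter dump, counter schema), each logged together with its result size. The same commands can be replayed by hand through the stock "ceph daemon" CLI; a sketch, assuming it runs where osd.0's admin socket is reachable:

    # Editor's sketch: replay the admin-socket commands seen in the log.
    # "osd.0" is the daemon name from this capture; the commands are the
    # ones the log shows being dispatched.
    import json
    import subprocess

    def admin_socket(daemon, *cmd):
        out = subprocess.run(["ceph", "daemon", daemon, *cmd], check=True,
                             capture_output=True, text=True).stdout
        return json.loads(out) if out.strip() else {}

    for cmd in (["config", "diff"], ["config", "show"],
                ["counter", "dump"], ["counter", "schema"]):
        result = admin_socket("osd.0", *cmd)
        print(" ".join(cmd), "->", len(result), "top-level keys")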
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:15:02.690091+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 35258368 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa81a000/0x0/0x4ffc00000, data 0x1588d36/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: tick
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_tickets
Nov 24 10:15:34 compute-0 ceph-osd[82549]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-24T10:15:03.690249+0000)
Nov 24 10:15:34 compute-0 ceph-osd[82549]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 35192832 heap: 151552000 old mem: 2845415832 new mem: 2845415832
Nov 24 10:15:34 compute-0 ceph-osd[82549]: do_command 'log dump' '{prefix=log dump}'
Nov 24 10:15:34 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19074 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26848 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19077 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 24 10:15:34 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2884202917' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:34 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19089 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:34 compute-0 nova_compute[257700]: 2025-11-24 10:15:34.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:34 compute-0 nova_compute[257700]: 2025-11-24 10:15:34.922 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:34 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26866 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:15:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:15:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:34 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:15:35 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:35 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:15:35 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28364 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 24 10:15:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/358214863' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: from='client.19056 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: from='client.26827 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: from='client.28334 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2776402292' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/535281277' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: from='client.19074 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1486480020' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: from='client.26848 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: from='client.19077 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2884202917' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/225897342' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1519524842' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19116 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26881 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28382 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 24 10:15:35 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/998320868' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19143 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:35 compute-0 crontab[302434]: (root) LIST (root)
Nov 24 10:15:35 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1471: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:35 compute-0 podman[302437]: 2025-11-24 10:15:35.794488175 +0000 UTC m=+0.065427738 container health_status 05f9b5292f1672e4c7fa72c4440542c8d193ad587864a6a98f7485e9da7348b4 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 24 10:15:35 compute-0 podman[302438]: 2025-11-24 10:15:35.824802213 +0000 UTC m=+0.094915915 container health_status c70a923d261cf9c97d8618e32b3a77bd3f0b1dd5c3d1af0e3f5a57673d3b0aad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 24 10:15:35 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28400 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:35 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.26893 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:35 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:35 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:35 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:35.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:36 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19161 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:36 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:36 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:36 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:36.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.19089 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.26866 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.28364 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/358214863' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.19116 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2192234632' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.26881 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.28382 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/939475987' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/998320868' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.19143 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: pgmap v1471: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1095355665' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4072984199' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28412 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19176 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 24 10:15:36 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3911979888' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28433 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:36 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 24 10:15:36 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1903975746' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19185 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-0 nova_compute[257700]: 2025-11-24 10:15:37.033 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:37 compute-0 nova_compute[257700]: 2025-11-24 10:15:37.036 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:37 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28454 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.28400 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.26893 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.19161 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.28412 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2225375590' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1163184553' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.19176 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2470601515' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3911979888' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2154776243' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4022284349' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1903975746' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/4159580487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/510893352' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 24 10:15:37 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3451260201' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 24 10:15:37 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4138227759' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 24 10:15:37 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2726888286' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:37.647Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:15:37 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:37.648Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:15:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:37 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1472: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 24 10:15:37 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4128979290' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:15:37 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 24 10:15:37 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3259949579' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:15:37 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:37 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:37 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:37.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:38 compute-0 systemd[1]: Starting Hostname Service...
Nov 24 10:15:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 24 10:15:38 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/154418130' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:15:38 compute-0 systemd[1]: Started Hostname Service.
Nov 24 10:15:38 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:38 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:38 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:38.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 24 10:15:38 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1942279183' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.28433 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.19185 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.28454 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/965202489' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3451260201' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4138227759' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2726888286' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/719084064' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3419392445' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: pgmap v1472: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4128979290' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3267547667' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3081976069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3340040457' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3259949579' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1498752122' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/946831100' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/154418130' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1200745248' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1942279183' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1982131563' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 10:15:38 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3262591560' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 24 10:15:38 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3129312953' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:15:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 24 10:15:38 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2809744926' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:15:38 compute-0 nova_compute[257700]: 2025-11-24 10:15:38.920 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:38 compute-0 nova_compute[257700]: 2025-11-24 10:15:38.921 257704 DEBUG oslo_service.periodic_task [None req-38aa3e56-8500-46eb-8132-e1c6cb5a6b52 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 10:15:38 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 24 10:15:38 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2390519076' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:39.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Nov 24 10:15:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:39.001Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Nov 24 10:15:39 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:39.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 24 10:15:39 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/293933469' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27025 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 24 10:15:39 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796992106' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3262591560' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3129312953' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3599221387' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2809744926' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3983569009' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1586408974' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1590080519' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2390519076' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/659856770' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3371967134' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/293933469' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2092556208' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3796992106' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27037 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 24 10:15:39 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3704579427' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 24 10:15:39 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410298879' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27043 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:39 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1473: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:39 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:39 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000023s ======
Nov 24 10:15:39 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:39.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 24 10:15:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:39 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:15:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:15:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:15:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:40 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:15:40 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27052 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 24 10:15:40 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1993983005' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19332 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:40 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:40 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:40 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:40.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.27025 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1915736469' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/305860648' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.27037 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3704579427' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/410298879' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.27043 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: pgmap v1473: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1378070210' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27076 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2716061122' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1993983005' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/103572047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1315247719' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/4202446663' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28601 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19350 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27091 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28616 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19356 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:40 compute-0 ceph-mgr[74626]: [prometheus INFO cherrypy.access.140376740606640] ::ffff:192.168.122.100 - - [24/Nov/2025:10:15:40] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Nov 24 10:15:40 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-mgr-compute-0-mauvni[74622]: ::ffff:192.168.122.100 - - [24/Nov/2025:10:15:40] "GET /metrics HTTP/1.1" 200 48382 "" "Prometheus/2.51.0"
Nov 24 10:15:41 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19389 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28649 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27109 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28640 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 24 10:15:41 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2137382171' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mon[74331]: from='client.27052 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mon[74331]: from='client.19332 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mon[74331]: from='client.27076 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mon[74331]: from='client.28601 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mon[74331]: from='client.19350 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/257434389' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1938538434' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1159984311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/325384731' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28661 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19407 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27127 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:41 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1474: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:41 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28670 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:41 compute-0 podman[303443]: 2025-11-24 10:15:41.791690352 +0000 UTC m=+0.064411222 container health_status c2993d2e1e7492e8913f6872df4d50c9e6d5a028204a98543d3426df327dd317 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 24 10:15:41 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 24 10:15:41 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695737134' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:15:41 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:41 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:41 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:41.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:42 compute-0 nova_compute[257700]: 2025-11-24 10:15:42.068 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 24 10:15:42 compute-0 nova_compute[257700]: 2025-11-24 10:15:42.069 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:42 compute-0 nova_compute[257700]: 2025-11-24 10:15:42.069 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5033 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 24 10:15:42 compute-0 nova_compute[257700]: 2025-11-24 10:15:42.069 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:15:42 compute-0 nova_compute[257700]: 2025-11-24 10:15:42.070 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 24 10:15:42 compute-0 nova_compute[257700]: 2025-11-24 10:15:42.071 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:42 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27139 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19431 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28685 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:42 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:42 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:42.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:42 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 24 10:15:42 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720225108' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.27091 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.28616 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.19356 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.19389 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.28649 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.27109 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.28640 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2137382171' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1569002270' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.28661 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.19407 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.27127 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: pgmap v1474: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1695737134' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/39772812' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:42 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1720225108' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19461 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28700 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:42 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 24 10:15:42 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/998003300' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19476 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:42 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:43 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28724 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27178 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28739 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='client.28670 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='client.27139 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='client.19431 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='client.28685 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='client.19461 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='client.28700 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2091781462' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/998003300' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3895810545' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/100271252' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:43 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:43 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:43.632Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:43 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 24 10:15:43 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221911114' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:15:43 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1475: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Nov 24 10:15:43 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19524 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:44.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:44 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:44 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:44 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:44.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:44 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 24 10:15:44 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/868985174' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: from='client.19476 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: from='client.28724 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: from='client.27178 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: from='client.28739 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2221911114' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: pgmap v1475: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Nov 24 10:15:44 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/2412318721' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/558850248' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1546744390' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/3438556666' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:44 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/868985174' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:15:44 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 24 10:15:44 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2045062225' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:15:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 24 10:15:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 24 10:15:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:44 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 24 10:15:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-nfs-cephfs-2-0-compute-0-ssprex[264195]: 24/11/2025 10:15:45 : epoch 69242af0 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 24 10:15:45 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-crash-compute-0[79585]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 24 10:15:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 24 10:15:45 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269085449' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Optimize plan auto_2025-11-24_10:15:45
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: [balancer INFO root] do_upmap
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: [balancer INFO root] pools ['volumes', 'backups', 'images', '.rgw.root', 'vms', 'default.rgw.meta', '.nfs', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: [balancer INFO root] prepared 0/10 upmap changes
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27229 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28805 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:45 compute-0 ceph-mon[74331]: from='client.19524 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:45 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:45 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:45 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1765316069' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:15:45 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 24 10:15:45 compute-0 ceph-mon[74331]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 24 10:15:45 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2045062225' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:15:45 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1739736414' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 24 10:15:45 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/664005702' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:15:45 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/1269085449' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:15:45 compute-0 ceph-mon[74331]: from='mgr.14715 192.168.122.100:0/2597398491' entity='mgr.compute-0.mauvni' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 24 10:15:45 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1476: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:45 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 24 10:15:45 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4035074411' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:15:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:46.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 10:15:46 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:46 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:46 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:46.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19590 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:46 compute-0 ceph-mon[74331]: from='client.27229 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:46 compute-0 ceph-mon[74331]: from='client.28805 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:46 compute-0 ceph-mon[74331]: pgmap v1476: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Nov 24 10:15:46 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/4035074411' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:15:46 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/2445363141' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:15:46 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3294680015' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 24 10:15:46 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1552895444' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 10:15:46 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3434273546' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 24 10:15:46 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 24 10:15:46 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236352281' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:15:46 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27247 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:47 compute-0 nova_compute[257700]: 2025-11-24 10:15:47.071 257704 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 24 10:15:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Nov 24 10:15:47 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3601364473' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 10:15:47 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19623 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:47 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:47.649Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Nov 24 10:15:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 10:15:47 compute-0 ceph-mon[74331]: from='client.19590 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:47 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2236352281' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:15:47 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1101154622' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 24 10:15:47 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/3601364473' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 24 10:15:47 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1370735725' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 10:15:47 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/1133703359' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 24 10:15:47 compute-0 ceph-mgr[74626]: log_channel(cluster) log [DBG] : pgmap v1477: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:47 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27262 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:47 compute-0 ceph-mon[74331]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Nov 24 10:15:47 compute-0 ceph-mon[74331]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145928164' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 10:15:47 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.28850 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.001000024s ======
Nov 24 10:15:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.102 - anonymous [24/Nov/2025:10:15:48.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 24 10:15:48 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.27271 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:48 compute-0 radosgw[89481]: ====== starting new request req=0x7fd9c8d935d0 =====
Nov 24 10:15:48 compute-0 radosgw[89481]: ====== req done req=0x7fd9c8d935d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 24 10:15:48 compute-0 radosgw[89481]: beast: 0x7fd9c8d935d0: 192.168.122.100 - anonymous [24/Nov/2025:10:15:48.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 24 10:15:48 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19644 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:48 compute-0 ceph-mon[74331]: from='client.27247 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:48 compute-0 ceph-mon[74331]: from='client.19623 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:48 compute-0 ceph-mon[74331]: pgmap v1477: 353 pgs: 353 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Nov 24 10:15:48 compute-0 ceph-mon[74331]: from='client.27262 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:48 compute-0 ceph-mon[74331]: from='client.? 192.168.122.100:0/2145928164' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 24 10:15:48 compute-0 ceph-mon[74331]: from='client.? 192.168.122.101:0/3485591004' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 24 10:15:48 compute-0 ceph-mon[74331]: from='client.? 192.168.122.102:0/1862918711' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 24 10:15:48 compute-0 ceph-mgr[74626]: log_channel(audit) log [DBG] : from='client.19656 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 10:15:49 compute-0 ceph-84a084c3-61a7-5de7-8207-1f88efa59a64-alertmanager-compute-0[104217]: ts=2025-11-24T10:15:49.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"